Kinetic and temporospatial gait parameters in a heterogeneous group of dogs

Washington T. Kano1, Sheila C. Rahal1, Felipe S. Agostinho1, Luciane R. Mesquita1, Rogerio R. Santos1, Frederico O. B. Monteiro2, Maira S. Castilho1 & Alessandra Melchert3

BMC Veterinary Research volume 12, Article number: 2 (2016)

A prime concern of gait analysis in a heterogeneous group of dogs is the potential influence of factors such as individual body size, body mass, type of gait, and velocity. Thus, this study aimed to evaluate, in a heterogeneous group of dogs, a possible correlation of stride frequency with kinetic and temporospatial variables, as well as with the percentage of body weight distribution (%BWD), and to compare symmetry indices (SI) between trotting and walking dogs. Twenty-nine clinically healthy dogs moving at a controlled velocity were used. The dogs were organized into two groups based on duty factor. Group 1 comprised 15 walking dogs, aged from 9 months to 8 years and weighing about 22.3 kg. Group 2 had 14 trotting dogs, aged from 1 to 6 years and weighing about 6.5 kg. The kinetic data and temporospatial parameters were obtained using a pressure-sensing walkway. The velocity was 0.9–1.1 m/s. The peak vertical force (PVF), vertical impulse (VI), gait cycle time, stance time, swing time, stride length, and percentage of body weight distribution among the four limbs were determined. For each variable, the SIs were calculated. Pearson's coefficient was used to evaluate the correlation between stride frequency and the other variables, first within each group and then including all animals. Except for the %BWD (approximately 60 % for the forelimbs and 40 % for the hind limbs), all other parameters differed between groups. Considering each group individually, a strong correlation was observed for most of the temporospatial parameters, but no significant correlation occurred between stride frequency and PVF, or between stride frequency and %BWD.
However, when all dogs were included, a strong correlation was observed for all temporospatial parameters, together with a moderate correlation between stride frequency and VI and a weak correlation between stride frequency and PVF. There was no correlation between stride frequency and %BWD. Groups 1 and 2 did not differ statistically in SIs. In a heterogeneous group of dogs moving at a controlled velocity, the %BWD and most of the SIs presented low variability. However, the %BWD seems to be the most accurate, since factors such as the magnitude of the variables may influence the SIs and induce misinterpretation. Based on the results obtained from the correlations, standardization of stride frequency could be an alternative to minimize the variability of temporospatial parameters. A prime concern of gait analysis using temporospatial parameters and kinetic data in a heterogeneous group of dogs is the potential influence of factors such as individual body size, body mass, type of gait, and velocity [1–5]. However, temporospatial parameters and kinetic data are important for the identification and understanding of orthopedic problems, and for evaluating treatment response [6–8]. In addition, spatiotemporal characteristics have been used to evaluate gait in dogs with spinal cord disease, and may be useful as outcome measures for functional analysis in these patients [9, 10]. Given the relationship of limb length with the values of stance time, swing time, gait cycle time, and stride length, the ratio between these values can be changed by increasing stride frequency or by the type of locomotion [5, 8]. This dynamic hampers the use of these parameters in comparisons because of the variability of the data. To walk at the same velocity as large dogs, small dogs require a higher stride frequency [3, 5]. Besides directly affecting temporal values, such an increase in stride frequency may modify the ratio between stance time and swing time [1].
On the other hand, kinetic variables such as the PVF and VI may also be influenced by velocity and acceleration, body weight, animal conformation, and musculoskeletal structure [2, 6, 7, 11, 12]. One strategy to minimize the variability is to normalize the vertical force to canine body weight [1, 3, 5, 6, 12], but differences in individual size and, consequently, in relative velocity can still interfere with the values [3, 4]. However, a linear relationship may exist between kinetic variables and stride frequency that is independent of the animal's size and gait velocity [5]. In addition, calculations and normalizations can be performed in order to minimize variations and provide parameters more apt for comparisons [1, 3, 13]. An index of symmetry or asymmetry can be used as an indicator of limb function, and different evaluation methods have been employed in dogs [5, 8, 13–18]. In healthy animals it is expected that the values of the variables obtained from the right and left forelimbs, or from the right and left hind limbs, are similar, consequently yielding an SI near 0, or perfect symmetry [8, 18]. Thus, the present study aimed to evaluate, in a heterogeneous group of dogs, a possible correlation of stride frequency with kinetic and temporospatial variables, as well as with the %BWD, and to compare SIs between trotting and walking dogs. The first hypothesis was that stride frequency would have a linear correlation with temporospatial parameters such as time and percentage of stance, time and percentage of swing, gait cycle time, and stride length. The second hypothesis was that the %BWD and SI would show low variability in a heterogeneous group of dogs and would not be affected by stride frequency.

Dog selection

This study was approved by the Ethics Committee of the School of Veterinary Medicine and Animal Science – Univ Estadual Paulista (UNESP) (no. 27/2014-CEUA). A signed Informed Consent Form was obtained from each dog's owner prior to entering the study.
Twenty-nine clinically healthy dogs moving at a controlled velocity were used. The dogs were organized into two groups based on duty factor. Group 1 comprised 15 walking dogs (duty factor >0.5), eight males and seven females, aged from 9 months to 8 years (mean ± SD, 3.3 ± 2 y) and weighing 22.3 ± 10 kg (mean ± SD). The dog breeds were Labrador retriever (n = 3), Pointer (n = 3), and eight crossbreeds. Group 2 had 14 trotting dogs (duty factor <0.5), six males and eight females, aged from 1 to 6 years (mean ± SD, 3.1 ± 1.6 y) and weighing 6.5 ± 4.7 kg (mean ± SD). The dog breeds were Shih Tzu (n = 3), Poodle (n = 2), Lhasa Apso (n = 1), Dachshund (n = 1), and seven crossbreeds. The dogs were judged to be healthy based on the results of complete physical and orthopedic examinations and radiographic exams of the hip and elbow joints. Before data collection, the dogs were familiarized with the environment and the pressure-sensing walkway, performing approximately five to seven practice trials. Each dog was weighed on the same electronic scale immediately before data collection. The kinetic and temporospatial parameters of gait were measured on a 1.95 m × 0.45 m pressure-sensing walkway (Walkway High Resolution HRV4; Tekscan, South Boston, Massachusetts, USA), whose sensors were equilibrated and calibrated as specified by the manufacturer. Designated software (Walkway 7.0; Tekscan Inc., South Boston, Massachusetts, USA) was used for data acquisition and analysis. The dogs were guided across the pressure-sensing walkway in a straight line on a loose leash to the left of the handler. For both groups, the velocity was maintained between 0.9 and 1.1 m/s, and the acceleration between −0.2 and 0.2 m/s². For each dog, an average of 20 trials was obtained, and the first five valid trials were selected. A trial was considered valid if all four limbs had contacted the walkway surface during each gait cycle without the dog turning its head or pulling on the leash.
The temporospatial parameters evaluated for each limb were the gait cycle time (s), stance time (s), swing time (s), and stride length (m), as previously described [19]. The stance time percentage was determined from the following formula: (stance time/gait cycle time) × 100. The swing time percentage was calculated as follows: (swing time/gait cycle time) × 100. The stride length corresponded to the distance between two consecutive ground contacts by the same limb. The duty factor was established by dividing stance time by gait cycle time. The stride frequency, expressed in cycles per minute, was defined as follows: (1/stance time) × 60. The PVF and the VI were the kinetic parameters evaluated. The PVF and VI were normalized to the dog's body weight and represented as a percentage of body weight. The %BWD among the four limbs was calculated as follows: (PVF of the limb/total PVF of the four limbs) × 100. The SI between the right and left sides, for both forelimbs and hind limbs and for each kinetic and temporospatial variable, was calculated as previously described [14] using the following equation:

$$ \mathrm{SI} = \frac{1}{2}\left(\frac{\mathrm{RS}-\mathrm{LS}}{\mathrm{RS}+\mathrm{LS}}\right) \times 100 $$

where RS is the value of the variable for the right side and LS is the value for the left side. A value of SI = 0 indicates perfect gait symmetry; values of SI > 0 indicate asymmetry toward the right limb, and values of SI < 0 indicate asymmetry toward the left limb. The normality of the data was checked by the Shapiro–Wilk test. To compare the temporospatial parameters and the kinetic data between groups, the F-test was used, followed by Student's t test. To evaluate the SIs between groups, the Mann–Whitney test was used. Differences were considered significant at p < 0.05. Pearson's correlation coefficients (r) were used to evaluate the linear relationships between stride frequency and the other variables, first within each group and then including all animals. The correlations were deemed significant at the 5 % probability level.
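The derived quantities defined above can be sketched in a few lines of code. This is not the authors' software; all numeric values below are illustrative, not data from the study.

```python
# Minimal sketch of the gait parameters defined in the text.

def stance_pct(stance_t, cycle_t):
    # (stance time / gait cycle time) x 100
    return stance_t / cycle_t * 100.0

def swing_pct(swing_t, cycle_t):
    # (swing time / gait cycle time) x 100
    return swing_t / cycle_t * 100.0

def duty_factor(stance_t, cycle_t):
    # >0.5 classifies a walking dog, <0.5 a trotting dog
    return stance_t / cycle_t

def stride_frequency(stance_t):
    # cycles per minute, as defined in the text: (1 / stance time) x 60
    return 60.0 / stance_t

def pct_bwd(pvf_limb, pvf_all_limbs):
    # limb PVF as a percentage of the summed PVF of all four limbs
    return pvf_limb / sum(pvf_all_limbs) * 100.0

def symmetry_index(rs, ls):
    # SI = 1/2 * (RS - LS) / (RS + LS) * 100; 0 = perfect symmetry
    return 0.5 * (rs - ls) / (rs + ls) * 100.0

# Example with a hypothetical walking dog:
cycle, stance = 0.80, 0.50           # seconds
print(duty_factor(stance, cycle))    # 0.625 -> walking
print(stance_pct(stance, cycle))     # 62.5
print(symmetry_index(0.50, 0.50))    # 0.0 -> perfect symmetry
```

Note that stance and swing percentages sum to 100 by construction, and that the SI is signed, so its sign identifies the asymmetric side.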
The kinetic and temporospatial values were expressed as means ± standard deviation, and the inter-dog coefficients of variation (CV) were calculated. The SIs were expressed as median, first quartile, and third quartile. The dogs of Group 1 (walking) and Group 2 (trotting) showed significant differences in the kinetic and temporospatial parameters in both the forelimbs (Table 1) and hind limbs (Table 2). However, no difference was observed in %BWD between groups. The mean %BWD including all dogs was 29.9 % for each forelimb and 20.1 % for each hind limb. Representative recordings of a dog of each group on a pressure-sensing walkway are shown in Fig. 1. Table 1 Comparison of the kinetic data and temporospatial parameters of the forelimbs between Groups 1 (walking) and 2 (trotting). Table 2 Comparison of the kinetic data and temporospatial parameters of the hind limbs between Groups 1 (walking) and 2 (trotting). Fig. 1 Representative recordings of a dog of Group 1 (a: walking) and Group 2 (b: trotting) on a pressure-sensing walkway. The linear correlation values between stride frequency and the kinetic and temporospatial variables for the dogs of Group 1 (walking), Group 2 (trotting), and all dogs together are described in Tables 3, 4 and 5, respectively. Considering each group individually, a strong correlation was observed for most of the temporospatial parameters, but no significant correlation occurred between stride frequency and PVF, or between stride frequency and %BWD. However, when all dogs were included, a strong correlation was observed for all temporospatial parameters, together with a moderate correlation between stride frequency and VI and a weak correlation between stride frequency and PVF. There was no correlation between stride frequency and %BWD.
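The Pearson analysis used for these correlations can be sketched with a short routine. The data below are invented for illustration only (stride frequency versus stride length), mirroring the strong negative correlations reported for the temporospatial parameters; they are not values from the study.

```python
import math

def pearson_r(x, y):
    # Pearson's correlation coefficient between two equal-length samples
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Invented example: stride length falls as stride frequency rises.
freq   = [70, 80, 95, 110, 130, 150]            # cycles/min
length = [1.10, 1.00, 0.85, 0.70, 0.60, 0.50]   # m

r = pearson_r(freq, length)
print(round(r, 3))   # strongly negative (close to -1)
```

In practice a statistics package (e.g. scipy.stats.pearsonr) would also return the p-value used to judge significance at the 5 % level.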
Table 3 Pearson correlation coefficients and P values of the correlations between kinetic data or temporospatial parameters and stride frequency of the forelimbs and hind limbs in dogs of Group 1 (walking). Table 4 Pearson correlation coefficients and P values of the correlations between kinetic data or temporospatial parameters and stride frequency of the forelimbs and hind limbs in dogs of Group 2 (trotting). Table 5 Pearson correlation coefficients and P values of the correlations between kinetic data or temporospatial parameters and stride frequency of the forelimbs and hind limbs including both groups. Groups 1 and 2 did not differ statistically in SIs. For both groups, the median, first quartile, and third quartile of the SIs are described in Tables 6 and 7, respectively, for the forelimbs and hind limbs. Box plots with median, interquartile range, and maximum and minimum values are shown in Figs. 2 and 3, respectively, for the forelimbs and hind limbs. Table 6 Comparison of the symmetry indices (%) of the kinetic data and temporospatial parameters of the forelimbs between Groups 1 (walking) and 2 (trotting). Table 7 Comparison of the symmetry indices (%) of the kinetic data and temporospatial parameters of the hind limbs between Groups 1 (walking) and 2 (trotting). Fig. 2 Boxplot of the kinetic data and temporospatial parameters of the forelimbs including both groups. Fig. 3 Boxplot of the kinetic data and temporospatial parameters of the hind limbs including both groups. Several variables must be controlled to avoid variability in kinetic data and temporospatial parameters, including velocity and type of locomotion [1, 17, 20, 21], stance time [21], training and habituation [22], and body size, conformation, and body weight [1, 2, 4, 5]. In addition, most kinetic studies have evaluated dogs that were walking or trotting, owing to the symmetry and convenience of these types of locomotion [5, 7, 16, 17].
In the present study, the velocity was maintained at 0.9–1.1 m/s and the acceleration between −0.2 and 0.2 m/s², as determined by the pressure-sensing walkway. A training program was not performed in the present study. Because data are more easily collected using a pressure-sensing walkway than with a single force plate, measurements are generally obtained after familiarization with the pressure-sensing walkway rather than after a training program [5, 16]. The center of gravity in dogs is located closer to the forelimbs, possibly near the base of the heart, so that in a healthy dog 60 % of the weight is carried by the forelimbs [23]. The body weight distributions reported in a study of healthy dogs walking on a pressure-sensing walkway were 60.7 and 39.3 % for small dogs, and 61.7 and 38.3 % for large dogs, for the forelimbs and hind limbs respectively, without influence of body weight or size [5]. In the present study, the mean body weight distributions were similar, being 30 % (G1) and 29.7 % (G2) for each forelimb, and 20 % (G1) and 20.3 % (G2) for each hind limb. Thus, the %BWD may be applicable to comparisons in a heterogeneous group of dogs, because the values are maintained regardless of body weight, body size, and gait type. Since the velocity was controlled in the present study, the stride frequency was used to calculate the Pearson correlation coefficients. Moreover, the stride frequency is an objective variable calculated by the system, so errors that may occur with tape measurements of the limbs are avoided. The Pearson analysis revealed a strong correlation for all temporospatial parameters when all dogs were analyzed as a single group, even more than when each group was analyzed individually, suggesting that gait type did not interfere with this correlation. The Pearson analysis also revealed a strong negative correlation between stride frequency and most temporospatial parameters.
Therefore, the values of stance time, swing time, gait cycle time, and stride length decreased as stride frequency increased. A study comparing small and large dogs walking at their preferred velocity on a pressure-sensing walkway also reported that most of the temporospatial parameters (gait cycle time, stance time, and swing time) were lower for small dogs [5]. On the other hand, a strong positive correlation with swing time percentage and a strong negative correlation with stance time percentage were found. Thus, as stride frequency increases, the limb spends proportionately less time on the ground and more time off the ground. Conversely, it has been reported that in quadrupeds the swing phase diminishes with increased velocity, whereas during trotting and galloping this parameter is quite constant [24]. With respect to the kinetic parameters, the PVF and VI showed low and moderate correlation coefficients, respectively, indicating a weaker relationship with stride frequency. A previous study using healthy dogs found that PVF increased as velocity increased and decreased as stance time increased, while VI decreased as velocity increased and increased as stance time increased [21]. Thus, other factors may influence PVF and VI, and these parameters may not be useful in a heterogeneous group of dogs. On the other hand, no significant correlation was observed between stride frequency and the %BWD, suggesting that the latter parameter was not influenced by stride frequency. Symmetry or asymmetry indices or symmetry ratios have been used to evaluate kinetic data and temporospatial parameters in dogs walking or trotting over a pressure-sensing walkway, aiming to characterize healthy dogs of the same size or of different sizes [5, 17], or to distinguish between lame dogs and clinically healthy dogs [16, 18]. The same strategy was employed in the present study in order to assess the validity of the SI in a heterogeneous group of dogs, but under controlled velocity.
In both groups, the SIs of all variables showed median values near 0 and asymmetry of less than 4 %, with no differences between Groups 1 and 2. These data suggest that these indices could be used to evaluate gait in a heterogeneous group of dogs. However, some factors can limit the use of the SI for comparison between groups. A major problem with the SI is that its precision depends on the relative magnitude of the evaluated variable [14]. If the magnitude of the variable itself is quite small, as for temporal gait variables in trotting dogs, even small differences may result in a high SI value. Such differences are probably clinically insignificant, or may result from capture artefacts. On the other hand, the SI of the gait cycle time could be used as an indicator of capture artefacts, since at a constant velocity no asymmetry is expected in this variable. As an example, the gait cycle time of the forelimb in Group 2 showed 2.58 % asymmetry (third quartile), which represented a difference of approximately 0.04 s relative to the mean value of this variable (0.44 s). Applied to the mean value of the stance phase (0.21 s), this same difference can result in an SI of 6 %, and applied to the dog with the shortest stance phase (0.13 s), the SI reaches 9.1 %. This could explain the high variation of the SIs of the temporal variables, as well as of the SI of VI (total force applied over time), in Group 2. In addition, PVF and %BWD showed equal SI values with a median near 0. However, high maximum values can be observed in the boxplots, especially in Group 2. The magnitude of the variable could be one reason for the higher variation in Group 2, but other factors, such as velocity variations not evident in the trials [14] and the lack of previous training [22], must be considered. In a heterogeneous group of dogs moving at a controlled velocity, the %BWD and most of the SIs presented low variability.
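This magnitude effect can be shown numerically: the same absolute right–left difference produces a much larger SI when the underlying variable is small. The values below are illustrative, not the study's data.

```python
def symmetry_index(rs, ls):
    # SI = 1/2 * (RS - LS) / (RS + LS) * 100 (0 = perfect symmetry)
    return 0.5 * (rs - ls) / (rs + ls) * 100.0

diff = 0.04  # the same right-left difference in seconds, applied twice

# On a long gait cycle time (walking-like magnitude) ...
si_cycle = symmetry_index(0.46 + diff, 0.46)
# ... and on a short stance time (trotting-like magnitude)
si_stance = symmetry_index(0.13 + diff, 0.13)

print(round(si_cycle, 2), round(si_stance, 2))
# the short-stance SI is several times larger for the same 0.04 s difference
```

This is why a seemingly large SI in a trotting dog may reflect only a small, clinically insignificant temporal difference.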
However, the %BWD seems to be the most accurate, since factors such as the magnitude of the variables may influence the SIs and induce misinterpretation. Based on the results obtained from the correlations, standardization of stride frequency could be an alternative to minimize the variability of temporospatial parameters. The identification of a linear correlation between stride frequency and other variables may be an option for future studies aiming at the determination of a correction factor. Therefore, of all the studied variables, the %BWD is the most useful and accurate for clinicians to evaluate a heterogeneous group of dogs, since this variable is not influenced by stride frequency.

Abbreviations: %BWD: Percentage of body weight distribution; PVF: Peak vertical force; VI: Vertical impulse; SI: Symmetry index; SIs: Symmetry indices

References
1. Bertram JEA, Lee DV, Case HN, Todhunter RJ. Comparison of the trotting gaits of Labrador retrievers and greyhounds. Am J Vet Res. 2006;61:832–83.
2. Bockstahler BA, Skalicky M, Peham C, Muller M, Lorinson D. Reliability of ground reaction forces measured on a treadmill system in healthy dogs. Vet J. 2007;173:373–8.
3. Mölsa SH, Hielm-Björkman AK, Laitinen-Vapaavuori OM. Force platform analysis in clinically healthy rottweilers: comparison with Labrador retrievers. Vet Surg. 2010;39:701–7.
4. Voss K, Galeandro L, Wiestner T, Haessig M, Montavon PM. Relationships of body weight, body size, subject velocity and vertical ground reaction forces in trotting dogs. Vet Surg. 2010;39:863–9.
5. Kim J, Kazmierczak KA, Breuer GJ. Comparison of temporospatial and kinetic variables of walking in small and large dogs on a pressure-sensing walkway. Am J Vet Res. 2011;72:1171–7.
6. McLaughlin RM. Kinetic and kinematic gait analysis in dogs. Vet Clin North Am Small Anim Pract. 2001;31:193–201.
7. Weigel JP, Arnold G, Hicks DA, Millis DL. Biomechanics of rehabilitation. Vet Clin North Am Small Anim Pract. 2005;35:1255–85.
8. Voss K, Imhof J, Kaestner S, Montavon PM.
Force plate gait analysis at the walk and trot in dogs with low-grade hindlimb lameness. Vet Comp Orthop Traumatol. 2007;20:299–304.
9. Gordon-Evans WJ, Evans RB, Conzemius MG. Accuracy of spatiotemporal variables in gait analysis of neurologic dogs. J Neurotrauma. 2009;26:1055–60.
10. Gordon-Evans WJ, Evans RB, Knap KE, Hildreth JM, Pinel CB, Imhoff DJ, et al. Characterization of spatiotemporal gait characteristics in clinically normal dogs and dogs with spinal cord disease. Am J Vet Res. 2009;70:1444–9.
11. Budsberg SC, Verstraete MC, Soutas-Little RW. Force plate analysis of the walking gait in healthy dogs. Am J Vet Res. 1987;48:915–8.
12. DeCamp CE. Kinetic and kinematic gait analysis and the assessment of lameness in the dog. Vet Clin North Am Small Anim Pract. 1997;27:825–40.
13. Gordon-Evans WJ. Gait analysis. In: Tobias KM, Johnston SA, editors. Veterinary surgery: small animal. St. Louis, Mo.: Elsevier; 2012. p. 1190–6.
14. Budsberg SC, Jevens DJ, Brown J, Foutz TL, DeCamp CE, Reece L. Evaluation of limb symmetry indices, using ground reaction forces in healthy dogs. Am J Vet Res. 1993;54:1569–74.
15. Fanchon L, Grandjean D. Accuracy of asymmetry indices of ground reaction forces for diagnosis of hind limb lameness in dogs. Am J Vet Res. 2007;68:1089–94.
16. LeQuang T, Maitre P, Roger T, Viguier E. Is a pressure walkway system able to highlight a lameness in dog? J Anim Vet Adv. 2009;8:1936–44.
17. Light VA, Steiss JE, Montgomery RD, Rumph PF, Wright JC. Temporal-spatial gait analysis by use of a portable walkway system in healthy Labrador retrievers at a walk. Am J Vet Res. 2010;71:997–1002.
18. Oosterlinck M, Bosmans T, Gasthuys F, Polis I, Van Ryssen B, Dewulf J, et al. Accuracy of pressure plate kinetic asymmetry indices and their correlation with visual gait assessment scores in lame and nonlame dogs. Am J Vet Res. 2011;72:820–5.
19. Agostinho FS, Rahal SC, Araújo FAP, Conceição RT, Hussni CA, El-Warrak AO, et al.
Gait analysis in clinically healthy sheep from three different age groups using a pressure-sensitive walkway. BMC Vet Res. 2012;8:1–7.
20. Riggs CM, DeCamp CE, Soutas-Little RW, Braden TD, Richter MA. Effects of subject velocity on force plate-measured ground reaction forces in healthy greyhounds at the trot. Am J Vet Res. 1993;54:1523–6.
21. McLaughlin RMJ, Roush JK. Effects of subject stance time and velocity on ground reaction forces in clinically normal greyhounds at the walk. Am J Vet Res. 1994;55:1672–6.
22. Fanchon L, Grandjean D. Habituation of healthy dogs to treadmill trotting: repeatability assessment of vertical ground reaction force. Res Vet Sci. 2009;87:135–9.
23. Nunamaker DM, Blauner PD. Normal and abnormal gait. In: Newton CD, Nunamaker DM, editors. Textbook of small animal orthopaedics. New York: International Veterinary Information Service; 1985. p. 1–15.
24. Vilensky JA. Locomotor behavior and control in human and non-human primates: comparisons with cats and dogs. Neurosci Biobehav Rev. 1987;11:263–74.

Acknowledgements
The authors are grateful to FAPESP (The State of São Paulo Research Foundation, 09/182997-7), CNPq (National Council for Scientific and Technological Development, 300710/2013-5), and CAPES (PROCAD-NF No. 21/2009).

Author affiliations
Department of Veterinary Surgery and Anesthesiology, School of Veterinary Medicine and Animal Science – Univ Estadual Paulista (UNESP), Botucatu, SP, Brazil: Washington T. Kano, Sheila C. Rahal, Felipe S. Agostinho, Luciane R. Mesquita, Rogerio R. Santos & Maira S. Castilho. Instituto de Saúde e Produção Animal, Universidade Federal Rural da Amazônia, Belém do Pará, Brazil: Frederico O. B. Monteiro. Department of Veterinary Clinic, School of Veterinary Medicine and Animal Science – Univ Estadual Paulista (UNESP), Botucatu, SP, Brazil: Alessandra Melchert.

Correspondence to Sheila C. Rahal. The authors have declared that no competing interests exist.
WTK, SCR and FSA conceived and designed the study; FOBM helped draft the manuscript; LRM, MSC and RRS helped collect the data; AM helped with statistics; all authors read, contributed to, and approved the final manuscript. Kano, W.T., Rahal, S.C., Agostinho, F.S. et al. Kinetic and temporospatial gait parameters in a heterogeneous group of dogs. BMC Vet Res 12, 2 (2016). https://doi.org/10.1186/s12917-015-0631-2
Fluidization of nanopowders: a review

J. Ruud van Ommen, Jose Manuel Valverde & Robert Pfeffer

Journal of Nanoparticle Research volume 14, Article number: 737 (2012)

Nanoparticles (NPs) are applied in a wide range of processes, and their use continues to increase. Fluidization is one of the best techniques available to disperse and process NPs. NPs cannot be fluidized individually; they fluidize as very porous agglomerates. The objective of this article is to review the developments in nanopowder fluidization. Often, an assistance method, such as vibration or microjets, must be applied to obtain proper fluidization. These methods can greatly improve the fluidization characteristics, strongly increase the bed expansion, and lead to better mixing of the bed material. Several approaches have been applied to model the behavior of fluidized nanopowders. The average size of fluidized NP agglomerates can be estimated using a force balance or a modified Richardson and Zaki equation. Some first attempts have been made to apply computational fluid dynamics. Fluidization can also be used to provide individual NPs with a thin coating of another material and to mix two different species of nanopowder. The application of nanopowder fluidization in practice is still limited, but a wide range of potential applications is foreseen. Nanoscience has attracted much attention from researchers over the past decades, but true nanotechnology has only more recently begun to yield promising results for a wide range of applications. It has brought advances such as energy-efficient LED lighting (Krames et al. 2007) and improved catalysts (Bell 2003; Li and Somorjai 2010), and is beginning to deliver medical breakthroughs (Riehemann et al. 2009). Nanotechnology encompasses the study and application of objects with at least one dimension smaller than 100 nm.
Nanoparticles (NPs)—with all three dimensions below 100 nm—have been widely studied over the past two decades, since their large surface area per unit mass leads to unique chemical, electro-magnetic, optical, and other properties. For many practical applications of NPs, large amounts of the material are required. Many of the synthesis and processing techniques for NPs currently under research—most of them operating in the liquid phase—are aimed at small quantities only. We think that it is important to consider the potential for scaling up right from the start; this is typically easier in the gas phase than in the liquid phase. Gas-phase methods offer inherent advantages such as the absence of solvent waste, fewer separation problems, the feasibility of continuous processing as opposed to batch processing, and versatility with respect to particle material, size, and structure (Kruis et al. 1998; El-Shall and Schmidt-Ott 2006). For the processing of micron-sized particles, a widely applied technique is fluidization: suspending the particles in an upward gas stream with such a velocity that drag and gravity are in equilibrium. Although it may sound counterintuitive, nanopowders can be fluidized as well. In contrast to particles of, say, 200 μm, however, NPs are not fluidized individually but as agglomerates: very dilute clusters of around 200 μm consisting of ~10^10 primary particles. The fluidization of nanopowders has attracted increasing attention in the past decade. The objective of this article is to review the developments in the field.

The agglomerating nature of NPs in the gas phase

Forces between NPs

The three main interactions between particles in the gas phase are van der Waals interaction, liquid bridging, and electrostatic interaction (Seville et al. 2000). Capillary or liquid bridges can be formed due to liquid that is adsorbed on the particle surface. When these bridges are formed, they normally dominate the interaction (see Fig.
1), but this is strongly dependent on the presence of liquid and the contact angle. The influence of capillary bridging on NP fluidization has not yet been studied in detail; in most cases, the van der Waals forces are assumed to be most important. The electrostatic charge strongly depends on previous interaction with other materials (tribocharging) and is typically less relevant at this small scale. It can, however, play an important role as a force between agglomerates. Fig. 1 The main forces between two silica particles of 10 nm as a function of the interparticle distance. All forces are normalized by dividing them by the gravity force on a single particle. The capillary force is given for water; for other liquids, this force is typically lower. The van der Waals force depends on the surface roughness, as shown by the curves for a smooth surface and for surface asperities. Models and constants from Butt and Kappl (2010). In the liquid phase, several mechanisms can overpower the van der Waals forces and prevent clustering of NPs, e.g., double layers formed by an electrolyte and steric hindrance by dissolved polymers. In the gas phase, separation mechanisms are less widespread, and the Hamaker constant—determining the magnitude of the van der Waals force—is in general larger than in the liquid phase (Butt and Kappl 2010). Therefore, NPs in the gas phase will typically have the tendency to agglomerate, unless they are charged. The nature of the particle surfaces will influence the van der Waals forces between the particles in different ways. First, the presence of a different material will lead to a different Hamaker constant. Second, the surface roughness might be changed, which also influences the van der Waals forces, as illustrated in Fig. 1.
The van der Waals force between particles (diameter $d_{\text{p}}$) with asperities of size $r_{\text{asp}}$ is given by Castellanos (2005):

$$ F_{\text{vdW}} = \frac{A_{\text{H}}\, d_{\text{p}}^{3}}{12\,(x + r_{\text{asp}})^{2}\,(x + r_{\text{asp}} + d_{\text{p}})^{2}} $$

where $A_{\text{H}}$ is the Hamaker constant and $x$ is the surface–surface distance. This equation is often simplified as:

$$ F_{\text{vdW}} = \frac{A_{\text{H}}\, d_{\text{p}}}{12\, x^{2}} $$

The agglomerates thus formed—in which the particle–particle bonds are not permanent—should be distinguished from aggregates, in which the particles are bound more strongly by solid-state necks (Teleki et al. 2008b). However, many production processes, such as the widely used flame synthesis, involve high temperatures that lead to indestructible aggregates of NPs by fusing of the contacts (Seipenbusch et al. 2010); these aggregates are typically of the order of one micron or smaller. Some authors use the terms soft agglomerates versus hard agglomerates instead of agglomerates versus aggregates (Nichols et al. 2002), while others use the terms interchangeably. The agglomerating nature of NPs in the gas phase is not only detrimental: it actually makes it possible to process large amounts of nanoparticulate material in small volumes.

The fractal morphology of NP agglomerates

The nature of NP agglomerates has been widely studied outside the fluidization literature. With the introduction of the concept of fractal geometry by Mandelbrot (1982), a proper way evolved to describe these agglomerates (Bushell et al. 2002). A fractal object shows self-similarity under transformation of scale (e.g., changing the magnification of a microscope). The number of particles in an agglomerate $N$ scales as (Friedlander 2000):

$$ N \sim \left( \frac{r_{\text{aggl}}}{r_{\text{part}}} \right)^{D} $$

where $r_{\text{aggl}}$ is the agglomerate radius, $r_{\text{part}}$ is the particle radius, and $D$ is the fractal dimension.
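As a numerical illustration of the two van der Waals expressions above, the sketch below compares the full asperity formula with the smooth-sphere simplification for a 10 nm particle. The Hamaker constant, separation, and asperity size are assumed illustrative values, not figures taken from the text.

```python
# Illustrative comparison of the Castellanos asperity expression with the
# simplified smooth-sphere van der Waals force. Parameter values are assumed.

A_H = 6.5e-20   # Hamaker constant, J (typical order for oxides; assumed)
d_p = 10e-9     # primary particle diameter, m
x   = 0.4e-9    # surface-surface separation at contact, m (assumed)

def f_vdw_asperity(x, r_asp):
    """Full expression: F = A_H d_p^3 / [12 (x + r_asp)^2 (x + r_asp + d_p)^2]."""
    return A_H * d_p**3 / (12.0 * (x + r_asp)**2 * (x + r_asp + d_p)**2)

def f_vdw_smooth(x):
    """Simplified form for smooth spheres: F = A_H d_p / (12 x^2)."""
    return A_H * d_p / (12.0 * x**2)

# A 1 nm asperity reduces the attraction by more than an order of magnitude:
print(f_vdw_smooth(x))            # ~3e-10 N
print(f_vdw_asperity(x, 1e-9))    # ~2e-11 N
```

Note that setting r_asp = 0 in the full expression recovers the simplified form in the limit x << d_p, which is why the two equations agree for smooth particles in near contact.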
For a compact agglomerate D approaches 3, but NP agglomerates are typically more dilute, with a fractal dimension D < 3. Forrest and Witten Jr. (1979) were the first to report the fractal nature of NP agglomerates. Later, it was shown that the detailed chemical nature of the NPs has little influence on the resulting agglomerates, but that the formation process does have a large effect (Lin et al. 1989; Schaefer 1989). Two general classes of agglomeration were distinguished, both starting from single particles: particle–cluster agglomeration and cluster–cluster agglomeration. Note that most authors describing these mechanisms use the term aggregation rather than agglomeration. In the case of particle–cluster agglomeration, the clusters, once formed, no longer move and all agglomeration is due to accretion of single particles. In the case of cluster–cluster agglomeration, the clusters themselves continue to move, collide, and form yet larger clusters. This yields a very complex distribution of clusters of different sizes. Within each class, three different regimes can be distinguished: reaction-limited agglomeration (RLA), diffusion-limited agglomeration (DLA), and ballistic agglomeration (BA). In the case of RLA, there is some form of repulsive interaction between approaching particles, so that only a small fraction of the collisions leads to agglomeration. In the case of DLA or BA, every collision results in particles or clusters sticking together. In DLA, the particles (or clusters) experience Brownian motion, whereas in BA they follow linear trajectories. Each class and regime leads to a specific morphology and fractal dimension of the agglomerate, as shown in Fig. 2. Kinetic growth models in a 2D embedding space. The mass fractal dimensions D of their 3D analogs are given (based on Friedlander (2000)). Nam et al. (2004) were the first to experimentally estimate the fractal dimension of fluidized NP agglomerates, based on earlier work on fine powders by Valverde et al.
(2001b). They found fractal dimensions around 2.57, close to the value of 2.5 that was earlier found from simulations for particle–cluster DLA. Also, the structure found from TEM analysis by Wang et al. (2002) (see Fig. 3a) shows the best agreement with the simulated structure for particle–cluster DLA. It is, however, remarkable that this is the prevailing mechanism and not cluster–cluster agglomeration. This might be due to the fact that "simple agglomerates" (small agglomerates, see below) are already formed before fluidization, combining into larger agglomerates during fluidization. Illustration of the multistage agglomerate structure obtained by ex-situ analysis. a TEM image of a network of silica NPs. b SEM image of a simple agglomerate or sub-agglomerate built up from these networks. c SEM image of a complex agglomerate consisting of several sub-agglomerates (reprinted from Wang et al. (2002) with permission from Elsevier) Wang et al. (2002) analyzed in more detail the fluidization and agglomerate structure of six kinds of silica powders, with primary particle sizes from 7 to 16 nm. By applying the Richardson–Zaki (R–Z) equation to bed expansion measurements, they determined the average agglomerate size to be 230–330 μm; the void fraction was as high as 98–99%. Wang et al. (2006b) argued that this direct application of the R–Z equation may lead to an overestimation of the mean terminal velocity of the agglomerates, and thus to an overestimation of the size and/or an underestimation of the agglomerate voidage. Wang et al. (2002) also reported that the agglomerates have a multistage structure. They show using TEM that on the smallest scale silica NPs form 3D netlike structures (Fig. 3a). These netlike structures, with sizes around 1 μm, may be held together by van der Waals forces, but the particles can also be connected by solid inter-particle necks, depending on the method used to synthesize the particles.
The netlike structures coalesce into larger conglomerations, with the shape of a single sphere or ellipsoid, which they call "simple agglomerates." These simple agglomerates typically have sizes of 1–100 μm (see Fig. 3b). However, this size range is too small in comparison with agglomerate diameters determined from fluidization experiments. Nam et al. (2004) aspirated samples of silica nanoagglomerates at different heights out of their expanded fluidized bed and examined them under the SEM. The agglomerate sizes averaged only around 30 μm, and the agglomerates were very porous and fragile. It appeared that the larger fluidized agglomerates were probably broken down into smaller simple agglomerates during their removal from the bed and/or during sample preparation for the SEM. Wang et al. (2002) concluded that simple agglomerates should form complex agglomerates during fluidization, with sizes ranging from 200 to 400 μm. They also show such agglomerates using SEM (see Fig. 3c), but it is uncertain whether these agglomerates look exactly like the ones inside the fluidized bed. Wang et al. (2002) did not speculate whether only the netlike structure has a fractal nature or whether it is also found at larger scales. Wang et al. (2006b) put forward three critical remarks about the correctness of the results of Wang et al. (2002). First, the samples could be increasingly consolidated if they were left inside the bed for too long. Second, in the process of getting the samples out of the bed for electron microscopy, the samples could be contaminated by particles resting near the sampling ports. Third, for the imaging, the sample had to undergo treatments, which could alter the original structure. As an alternative, Wang et al. (2006b) proposed laser-based planar imaging of agglomerates just above the bed surface. This will be discussed in the section "Determination of the agglomerate size".
Fluidization of nanopowders using aeration alone Many nanopowders form large and compact agglomerates simply due to storage and are very difficult to fluidize because of the large cohesive forces between the particles, given their size and extensive surface area. Therefore, removing agglomerates larger than 500 μm will usually improve fluidization quality. Some nanopowders will fluidize smoothly at low superficial velocities with practically no bubbles, large bed expansion, and little elutriation. Other nanopowders require relatively high superficial velocities to be fluidized, and vigorous bubbling with significant elutriation is observed. To smoothly fluidize and process these types of nanopowders without considerable gas-bypassing, some sort of external assistance such as vibration or stirring is usually required. We will treat the various assistance methods later in this article; in this section, we will discuss gas fluidization of nanopowders without assistance methods. Chaouki et al. (1985) were among the first to report the fluidization of aerogel (highly porous aggregates of primary particles a few nanometers in size). They showed that nanostructured Cu/Al2O3 aerogel fine particles can be smoothly fluidized at superficial velocities greatly in excess of the expected minimum fluidization velocity for such fine powders, because they form stable clusters or agglomerates. These agglomerates fluidized uniformly and expanded in a homogeneous manner, providing a means of dispersing and processing the very high specific surface area nanostructured aerogels. Morooka et al. (1988) were able to fluidize submicron (20–500 nm) Ni, Si3N4, SiC, Al2O3, and TiO2 particles at high gas velocities. The particles formed agglomerates, and large gas bubbles were observed. Similarly, Pacek and Nienow (1990) were also able to fluidize ultrafine, very dense, hard metal powders (particle diameter 2–8 μm), which formed agglomerates.
At higher gas velocities, the bed had two layers: a bottom layer with large agglomerates (up to 2 mm in diameter) and a top layer of smaller agglomerates, which fluidized smoothly. At even higher gas velocities, the entire bed was fluidized and the large agglomerates were broken up into smaller, more stable ones. They also reported that the bed behaved as if fluidizing Geldart group B powders—bubbling occurred at the minimum fluidization velocity (U mf), and bed expansion was low. Song et al. (2009) showed that adding coarser particles (e.g., FCC catalyst) to a fluidized bed of NPs improves the fluidization quality: it increased the bed expansion and reduced the elutriation. Wang et al. (2002) studied the fluidization of various fumed silica NPs. They showed that hydrophobic NPs expanded from 2.5 up to 10 times their initial bed height, whereas hydrophilic NPs expanded only 1.5 up to 3 times. They also found relatively large minimum fluidization velocities for the hydrophilic NPs as compared to the hydrophobic particles. Wang et al. (2002) introduced the classification of the fluidization of nanopowders into "agglomerate particulate fluidization" (APF) and "agglomerate bubbling fluidization" (ABF); see Table 1 and the movies in the supplementary material. APF refers to smooth, liquid-like, bubble-less fluidization as previously observed when fluidizing aerogels (Chaouki et al. 1985). ABF refers to bubbling fluidization with very little bed expansion as previously observed by other researchers (Morooka et al. 1988; Pacek and Nienow 1990). ABF is observed not only for NPs, but also for other small particles of Geldart type C. APF is exclusively found for certain types of NPs and conditioned fine powders such as xerographic toners (Valverde and Castellanos 2007b). Wang et al. (2000) proposed to classify NPs exhibiting APF as E-particles, but this naming has never been adopted by other researchers.
Table 1 Comparison of the fluidization behavior of APF and ABF (based on Wang et al. 2002) Esmaeili et al. (2008) studied the solids hold-up distribution of zirconia and alumina particles of 250 and 120 nm diameters, respectively. They reported ABF-type behavior and found, using optical fibers and radioactive densitometry, that the solids hold-up is quite constant in both the radial and axial directions. Only for alumina was a change in the axial direction found: the solids hold-up increased when moving in the upward direction. Esmaeili et al. (2008) suggest that this is due to larger agglomerates leading to larger bubbles in the bottom zone. However, this does not seem logical, given that larger bubbles will rise faster and thus lead to a lower gas hold-up. Further research will be required to elucidate this topic. Wang et al. (2007b) state that NP fluidization does fit in the classical Geldart fluidization regime map, with A, B, C, and D powders (Geldart 1973). They report that agglomerates with typical properties (diameter of 220 μm and apparent density of 22 kg/m3) are close to the A/C boundary in the Geldart diagram: the ratio of the inter-agglomerate force to the buoyant weight of a single agglomerate is comparable to the same ratio for macro-sized particles at the A/C boundary. This indicates why NPs sometimes show C-type behavior and other times show more A-type behavior (homogeneous fluidization). Valverde and Castellanos (2007b) used a different approach: they utilized the similarity between the fluidization behavior of beds of non-cohesive particles fluidized by liquids and the uniform behavior of gas-fluidized beds of conditioned fine powders (Valverde et al. 2003; Wang et al. 2002). They used empirical relationships for liquid-fluidization of larger particles and modified them to take into account the agglomeration in gas fluidization of cohesive particles (see also the section "Modeling of NP fluidized beds").
They distinguished two different states of homogeneous fluidization: solid-like fluidization, in which the agglomerates are jammed and keep their place (mostly similar to homogeneous fluidization of Geldart A particles), and liquid-like fluidization, in which agglomerates move freely but no macroscopic bubbles are formed. With increasing gas velocity, NPs move from the solid-like to the fluid-like fluidization state. With a further increase of the gas velocity, very light and small NP agglomerates will be elutriated, whereas larger and heavier NPs (roughly d p > 30 nm and ρ p > 3,000 kg/m3) will move from fluid-like to bubbling fluidization. This corresponds to APF and ABF behaviors, respectively. Using this approach, they defined solid-like to fluid-like to elutriation (SFE) behavior and solid-like to fluid-like to bubbling (SFB) behavior. These two types of behavior would replace the classical Geldart type C behavior for the new type of fluidizable fine and ultrafine powders, which were unknown at the time the classical Geldart diagram was reported (see Fig. 4). Modified Geldart's diagram (Valverde and Castellanos 2007b) showing the boundaries between the types of fluidization expected for fine particles, including solid-like to fluid-like to elutriation (SFE) behavior and solid-like to fluid-like to bubbling (SFB) behavior. The thick gray line represents the boundary between A and C powders as shown in the original Geldart's diagram (Geldart 1973) Determination of the agglomerate size The formation of porous and light agglomerates is the key reason why NPs can be fluidized. To determine their fluidization characteristics, it is important to know the size of the agglomerates. Zhu et al. (2005) fluidized many different Evonik-Degussa Aerosil® and Aeroxide® metal oxide nanopowders (hydrophilic and hydrophobic silicas, alumina, and titania) as well as carbon blacks from Cabot Corp. conventionally (aeration alone).
Some of these powders showed APF behavior, while others showed ABF-type behavior. They took images of the fluidized agglomerates at the interface between the bed and the freeboard (in the splash zone) with a CCD camera and laser-beam illumination and used image analysis software to find the average agglomerate size. Zhu et al. (2005) also estimated the average agglomerate size from initial and final bed height measurements combined with the R–Z equation and obtained reasonably good agreement with the measured agglomerate sizes in the splash zone for APF-type nanopowders. For example, for Aerosil R974 (a hydrophobic silica showing APF behavior), the experimentally measured value of the agglomerate size was 315 μm as compared to 211 μm using the R–Z equation with n = 5.0. Wang et al. (2006a) measured the size of fluidized agglomerates of Evonik-Degussa fumed silica Aerosil R974 in the splash zone by using a high-resolution CCD camera and a planar laser sheet for illumination. Their experimental equipment and image analysis algorithm provided more accurate images of the fluidized nanoagglomerates than previous studies. They reported both a number–length-based average (N-L) and a volume-based average (S-V) agglomerate size. Both the measured N-L and the S-V average agglomerate size varied with gas velocity, with an S-V average size of 262 μm at 1.18 cm/s and 189 μm at 1.81 cm/s. Other investigators who also measured fluidized nanoagglomerate sizes in the splash zone include Valverde et al. (2008a), who studied the effect of using fluidizing gases of different viscosities, and Hakim et al. (2005b), who fluidized NPs at reduced pressure (with vibration) to study the effect of low pressure on the minimum fluidization velocity. While visualizing agglomerates in the splash zone seems more reliable and better than ex-situ analysis of sampled agglomerates, it is questionable whether these agglomerates are truly representative of the average bed material. Hakim et al.
(2005b) argue that the method is representative, since neither size segregation in the bed nor a change of the agglomerate size over time was observed. While the absence of size segregation might be the case for their specific situation, size segregation has been observed by other researchers when fluidizing nanopowders. Moreover, the dynamic nature of agglomerates makes it very well conceivable that the size and/or weight will differ with height (Quintanilla et al. 2012). Gundogdu et al. (2007) determined the agglomerate size in the bed using X-ray microtomography; they were able to reach a spatial resolution of 400 nm. They applied this technique to fluidized beds of zinc oxide and copper oxide. They found an average agglomerate size of around 500 μm, but with a very large spread: it ranged from about 10 μm to 2 mm. Remarkably, they report an agglomerate porosity of around 50%, whereas most other authors report values as high as 98–99%. Recently, Quevedo and Pfeffer (2010) measured the size of fluidized agglomerates of both APF- and ABF-type nanopowders in-situ in conventional and assisted gas-fluidized beds using Lasentec focused beam reflectance method (FBRM) and particle vision measurement (PVM) probes. Both in-situ particle size distributions and agglomerate images of Aerosil R974 (APF type) and Aerosil 90 (ABF type) nanopowders were obtained. This was achieved by reducing the electrostatic charge in the fluidized bed by bubbling the gas through an alcohol–water solution before it entered the bed. Failure to remove electrostatic charges resulted in blocking of the probe lenses and blurred images or spiky size distributions. The agglomerate size distributions showed that Aerosil R974 agglomerates are smaller and less dense than Aerosil 90 agglomerates. These observations match their respective fluidization behavior and confirm that the APF–ABF classification is dependent on both the size and density of the agglomerates.
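The number–length (N-L) and volume-weighted (surface–volume, or Sauter) averages reported in these imaging and FBRM studies can be computed from a binned size distribution. A minimal sketch with invented bin data (not measured values):

```python
# Sketch of the two averages used for agglomerate sizing: the
# number-length (N-L) mean and the surface-volume (Sauter, S-V) mean.
# The bin sizes and counts below are illustrative, not measured data.

sizes  = [100e-6, 200e-6, 300e-6]   # agglomerate diameters [m] (assumed bins)
counts = [50, 30, 10]               # number of agglomerates per bin (assumed)

# N-L mean: simple number-weighted average of the diameters
d_nl = sum(n * d for n, d in zip(counts, sizes)) / sum(counts)

# S-V (Sauter) mean: ratio of the third to the second moment
d_sv = (sum(n * d**3 for n, d in zip(counts, sizes))
        / sum(n * d**2 for n, d in zip(counts, sizes)))

print(f"N-L mean: {d_nl*1e6:.0f} um, S-V mean: {d_sv*1e6:.0f} um")
```

Because the Sauter mean weights large agglomerates more heavily, it always lies above the number–length mean for a polydisperse sample, which is why the two averages reported for the same powder can differ noticeably.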
A comparison of FBRM volume-weighted mean agglomerate size with that measured in the splash zone by different investigators for fluidization of Aerosil R974 is given in Table 2. Table 2 Comparison of FBRM volume-weighted mean agglomerate size with that measured in the splash zone by different investigators for fluidization of Aerosil R974 (Quevedo and Pfeffer 2010) Fluidization of nanopowders using external assistance methods APF-type nanopowders are relatively easy to fluidize using aeration alone after very large and compact agglomerates (>500 μm) formed during storage are removed. To smoothly fluidize and process ABF-type nanopowders, some sort of external assistance is usually required; otherwise, they show considerable gas-bypassing and significant elutriation of particles due to the required high fluidization velocity. Various assistance methods have been developed to enhance the fluidization of nanopowders. These methods include vibration, stirring, sound waves, pulsed flow, centrifugal fields, electric fields, and secondary gas flow from a microjet. Mechanical vibration Nam et al. (2004) applied vertical sinusoidal vibration (accelerations up to 5.5 times the gravitational acceleration and vibration frequencies from 30 to 200 Hz) to a fluidized bed of Aerosil R974, an APF-type nanopowder. They were able to decrease the mean agglomerate size (see Table 2), increase bed expansion, and reduce the minimum fluidization velocity. They estimated the fluidized agglomerate size, density, external porosity, and terminal velocity using a method originally developed by Valverde et al. (2001a) for micron-sized particles, which combines the fractal structure of the agglomerates with the R–Z equation. Nam et al. (2004) also studied the mixing characteristics of the vibro-fluidized bed; these results will be discussed in a later section on "Mixing of fluidized nanopowders."
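The estimation method combining the fractal agglomerate structure with the R–Z equation can be sketched roughly as follows: the measured bed expansion gives the inter-agglomerate voidage ε, the R–Z relation U = U_t ε^n yields the agglomerate terminal velocity, and Stokes' law with the fractal density ρ_a = ρ_p (d_a/d_p)^(D−3) is inverted for the agglomerate size. All numerical inputs below are illustrative assumptions, not data from the studies cited.

```python
# Sketch of an agglomerate-size estimate combining the fractal scaling
# rho_a = rho_p * (d_a/d_p)**(D-3), Stokes terminal velocity, and the
# Richardson-Zaki expansion law U = U_t * eps**n. Assumed inputs only.

rho_p = 2200.0    # primary-particle (silica) density [kg/m^3] (assumed)
d_p   = 12e-9     # primary-particle diameter [m] (assumed)
D     = 2.5       # fractal dimension (particle-cluster DLA)
mu    = 1.8e-5    # gas viscosity [Pa s]
g     = 9.81
n     = 5.0       # Richardson-Zaki exponent (Stokes regime)

U   = 0.01        # superficial gas velocity [m/s] (assumed)
eps = 0.80        # inter-agglomerate voidage from bed expansion (assumed)

U_t = U / eps**n  # agglomerate terminal velocity from R-Z

# Stokes + fractal density: U_t = rho_p*g*d_p**(3-D)*d_a**(D-1)/(18*mu)
d_a = (18 * mu * U_t / (rho_p * g * d_p**(3 - D))) ** (1 / (D - 1))
rho_a = rho_p * (d_a / d_p) ** (D - 3)   # effective agglomerate density

print(f"d_a ~ {d_a*1e6:.0f} um, rho_a ~ {rho_a:.0f} kg/m^3")
```

With these inputs the sketch returns an agglomerate size of a few hundred microns and a density of a few tens of kg/m³ or less, which is the order of magnitude reported for fluidized silica nanoagglomerates.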
Levy and Celeste (2006) studied the effects of both mechanical and acoustic vibration on the fluidization of fumed silica Aerosil 200. By adding horizontal vibrations (frequency up to 9.5 Hz), they reduced the minimum fluidization velocity, which was further reduced when adding 80 Hz acoustic vibrations. Horizontal vibration-assisted fluidization of three different Evonik-Degussa silica NPs was also studied by others (Harris 2008; Zhang and Zhao 2010) using vibration frequencies from 0 to 34 Hz. They observed APF and ABF fluidization behaviors, with the transition occurring at different frequencies for each type of particle. Smooth APF-type fluidization was observed for all particles at frequencies greater than 16.7 Hz, but fluidization could not be obtained in the absence of external agitation for the three silica NPs which they studied. This may be because the authors did not sieve the nanopowders to remove the very large agglomerates that formed due to storage. Mechanical stirring Mechanical stirring of the fluidized bed is another way to improve fluidization of nanopowders. It can be carried out using a blade stirrer or using large magnetic particles. King et al. (2008) used a blade stirrer located in the bottom zone of the bed and report radial blending of the entire bed, which prevents channeling. The blades sweep as close to the edges of the distributor plate as possible to minimize the opportunity for powder to collect along the base of the walls. According to King et al. (2008), radial stirring complements the axial flow of fluidizing gas and has been shown to promote good fluidization behavior for cohesive and difficult-to-fluidize powders. Yu et al. (2005) used magnetic particles excited by an external oscillating magnetic field to stir the bed; see also Pfeffer et al. (2010).
The magnetic particles were large (1–2 mm) and heavy (barium ferrite) and did not fluidize along with the nanopowder, but translated and rotated at the bottom of the column just above the gas distributor. The electromagnetic field was provided by coils located outside the column at the level of the distributor. They found that magnetic stirring enhanced the fluidization of nanoagglomerates quite significantly by breaking up clusters of agglomerates and by hindering the formation of bubbles. Yu et al. (2005) were able to smoothly fluidize, without bubbles, large clusters (>500 μm) of Aerosil R974 nanopowder. This nanopowder fluidizes smoothly (APF type) when sieved below 500 μm. However, larger and more compact agglomerates that formed during storage (from about 0.5 to 10 mm) could not be fluidized with aeration alone, even at a gas superficial velocity as high as 13.2 cm/s. Figure 5, taken from Yu et al. (2005), shows the fluidization behavior (pressure drop and bed expansion) of the large (>500 μm) SiO2 NP agglomerates, with and without magnetic excitation. Without magnetic assistance, visual observation showed that the smaller agglomerates were in motion at the top of the bed, but the larger agglomerates remained at the bottom of the bed, causing channeling of the gas flow. The bed showed almost no expansion, and the pressure drop was less than the bed weight, indicating that the entire bed was not fluidized. After turning on the external magnetic field, the large agglomerates became much smaller due to fragmentation (disruption of interparticle forces) caused by collisions with the magnetic particles, and these smaller agglomerates participated in the fluidization of the bed. After a few minutes, even at the relatively low gas velocity of 0.94 cm/s, all of the large agglomerates disappeared. The bed expanded slowly and uniformly, while the pressure drop became very close to the weight of the bed, indicating that the entire bed was fluidized.
The magnetic particles were then removed, the magnetically processed NP agglomerates were recharged back into the fluidization column, and a conventional fluidization experiment (no magnetic assistance) was performed. A very large reduction in the minimum fluidization velocity (U mf), from larger than 13.2 to 2.29 cm/s, was observed, indicating that the average agglomerate size was significantly reduced. Bed expansion ratio and pressure drop for hard agglomerates with and without magnetic excitation. Solid lines the bed expansion ratios and dashed lines the pressure drops. Magnetic field intensity 140 G at the center of the field, mass ratio of magnets to NPs 2:1, AC frequency 60 Hz (reprinted from Yu et al. (2005) with permission from Wiley). Umf1 minimum fluidization velocity without magnetic excitation; Umf2 minimum fluidization velocity with magnetic excitation Yu et al. (2005) also reported the average agglomerate size of sieved Aerosil R974 nanopowder (less than 500 μm in size) from images taken in the splash zone with and without magnetic assistance. Although the sieved nanopowder fluidized well without magnetic assistance, the measured average agglomerate size decreased from 315 to 195 μm when magnetic assistance was applied. Yu (2005) also fluidized primary NPs of carbon black pelletized to 800 μm (Cabot Black Pearls 2000) by this method. Neither fluffy carbon black NPs nor pelletized carbon black could be fluidized with aeration alone. He showed that without magnetic excitation, the minimum fluidization velocity is 27.6 cm/s, and this high gas velocity leads to large elutriation of carbon black particles and large gas-bypassing. When magnetic excitation is applied, the minimum fluidization velocity drops to 1.93 cm/s, and this much lower gas velocity prevents elutriation and significantly reduces bubbling and gas bypass.
Also, the bed expansion increased from about 1.6 to about 5 or 6 times the original bed height, and the surface of the bed appeared uniform. Zeng et al. (2008) used a magnetically assisted fluidized bed similar to those described earlier (Yu 2005; Yu et al. 2005) to fluidize a mixture of APF-type SiO2 (20 nm) and ABF-type ZnO (20 nm) nanopowders. They found that this mixture can be fluidized stably and almost homogeneously with the magnetic assistance, depending on the magnetic field intensity applied and the initial mixture content. Quevedo et al. (2007) studied the effect of using assistance methods such as vibration and/or moving magnetic particles on the humidification and drying of fluidized Aerosil 200 and Aerosil 90 nanopowders. Moisture was added to the fluidizing gas (nitrogen) by bubbling it through water, and the moisture level in the gas was monitored on-line using humidity sensors upstream and downstream of the fluidized bed. The amount of moisture adsorbed/desorbed by the powders was obtained by integration of the time-dependent moisture concentration. The experiments were run at temperatures above the dew point, to ensure the absence of liquid water and avoid the change of particle interaction by liquid bridging. It was found that when the bed of powder is assisted during fluidization, the mass transfer between the gas and the nanopowder is much larger than when the powder is conventionally fluidized. For Aerosil 200 (APF type), the presence of large agglomerates does not affect the amount of moisture retained by the fluidized bed, since they are found in small amounts. For Aerosil 90 (ABF type), large agglomerates constitute a significant fraction of the powder and affect the adsorption of moisture due to the poor mixing between the solid and gas phases, hindering the overall adsorption of moisture by the bed of powder.
The enhancement of fluidization due to the assistance methods is reflected by the increase of moisture retained by the fluidized bed of powder during humidification and by the reduction of the time needed for the bed of powder to release the moisture trapped during drying. Vibration assistance was found to be more effective for Aerosil 200, but magnetic assistance was needed for Aerosil 90 in order to break up the very large agglomerates formed in this ABF nanopowder. For Aerosil 90, a combination of vibration and magnetic assistance gave the best results. Sound waves Zhu et al. (2004) used an external force field generated by sound in order to enhance the fluidization of APF-type Aerosil R974 fumed silica NPs. They placed a loudspeaker at the top of the bed. At sound frequencies of 50 or 100 Hz, they obtained a larger bed expansion and also a reduction in the minimum fluidization velocity. However, at frequencies greater than 200 Hz, they observed large ellipsoid-shaped bubbles, which do not occur with aeration alone. Guo et al. (2006) also fluidized fumed silica NPs under the influence of an acoustic field. At frequencies below 200 Hz, they found results similar to those of Zhu et al. (2004). Liu et al. (2007) used sound-assisted fluidization of two kinds of SiO2 NPs (having primary sizes of 5–10 nm): one without surface modification and the other modified with an organic compound. The acoustic field (~100 dB and 50 Hz) reduced the minimum fluidization velocity for both NPs, but the untreated silica failed to fluidize as smoothly as the surface-modified silica. Differences in fluidization behavior, bed expansion, and agglomeration behavior were also observed for the two kinds of NPs, indicating that the surface properties of NPs have a significant influence on their fluidization behavior. Similar results were previously reported (Zhu et al. 2005) when comparing the fluidization behavior of hydrophilic and hydrophobic silicas without external assistance.
Sound-assisted fluidization of silica and alumina nanopowders was also recently studied by Ammendola and Chirone (2010). As already reported by others above, they found the fluidization quality of both nanopowders to be poor without external assistance, even though some bed expansion was found. However, the application of acoustic fields with intensities above 135 dB and frequencies around 120 Hz increased the fluidization quality of both powders, as indicated by ideal-like pressure drop curves, relatively high bed expansions, and the occurrence of a homogeneous regime of fluidization. A drawback of the use of sound waves produced by a loudspeaker placed at the top of the bed is that only the region close to the free surface can be excited, while larger and heavier agglomerates are mainly present at the bottom of the bed. Pulsed gas flow Rahman (2009) applied pulsations to the gas flow in a fluidized bed of different nanopowders; only part of the gas flow was oscillated (i.e., there was a constant base flow). She found that the fluidization quality is significantly improved compared to steady gas flow conditions: the solids motion was enhanced, channeling was prevented, and the minimum fluidization velocity decreased. Gas-phase pulsation was found to be especially effective when fluidizing ABF-type nanopowders, which tend to bubble as soon as minimum fluidization conditions are reached and show very little bed expansion when fluidized conventionally. By applying pulsation assistance, bubbles bursting at the bed surface were greatly inhibited, and bed expansion was higher than for steady flow conditions. It was also found that the minimum fluidization velocity decreased when increasing the pulsation frequency. A disadvantage is that pulsation can lead to increased elutriation.
On the other hand, gas pulsation can be used effectively to improve the quality of NP fluidization without adding any internals or foreign material to the bed, as is done when using magnetic-assisted fluidization. Centrifugal field The use of a rotating fluidized bed (RFB) to impose a centrifugal field on nanopowders has some distinct advantages over a conventional fluidized bed. The centrifugal force acting on the agglomerates allows fluidizing them at much higher gas velocities, resulting in a much higher gas throughput per unit area of distributor, less entrainment of particles, and shallow beds resulting in very small bubbles and therefore very little gas-bypassing. Fumed silica, alumina, and titania nanopowders have been successfully fluidized in a rotating fluidized bed (Matsuda et al. 2004; Nakamura and Watano 2008; Quevedo et al. 2006). A smooth surface and appreciable bed expansion were obtained when using APF nanopowders, but ABF nanopowders such as Aeroxide titania P25 did not expand significantly due to bubbling. Nakamura and Watano (2008) showed that the minimum fluidization velocity increases linearly with G 0 for different metal oxide nanopowders and is highest for Aeroxide titania P25 (ABF type). The fully expanded bed height was found to decrease with increasing G 0 for alumina and silica nanopowders, but was difficult to measure for the ABF-type titania due to bubbling. As shown in Fig. 6, the mean agglomerate size of Aerosil R974 NPs calculated using the fractal model suggested by Valverde et al. (2001a) is reduced by a factor of as much as four at high G 0 (40 times the acceleration of gravity) as compared to a conventional fluidized bed (G 0 = 1). As expected, the agglomerate density (Fig. 7) in an RFB is larger than that in a conventional fluidized bed and is also larger than in vibration- and magnetic-assisted fluidized beds.
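The linear increase of U mf with G 0 follows directly from a force balance: at minimum fluidization the drag on an agglomerate balances its centrifugal weight, and in the Stokes regime the drag is linear in velocity. A minimal sketch with assumed agglomerate properties (not values from Nakamura and Watano):

```python
# Sketch: why U_mf scales linearly with centrifugal acceleration G0 in a
# rotating fluidized bed. Stokes drag (linear in velocity) balances the
# centrifugal weight ~ G0*g. Agglomerate properties are assumed values.

mu    = 1.8e-5    # gas viscosity [Pa s]
g     = 9.81      # gravitational acceleration [m/s^2]
d_a   = 220e-6    # agglomerate diameter [m] (assumed)
rho_a = 22.0      # apparent agglomerate density [kg/m^3] (assumed)

def u_mf(G0):
    """Stokes-regime fluidization velocity of one agglomerate under G0*g."""
    return rho_a * (G0 * g) * d_a**2 / (18 * mu)

for G0 in (1, 10, 40):
    print(f"G0 = {G0:2d}: U_mf ~ {u_mf(G0)*100:.1f} cm/s")
```

This single-agglomerate balance ignores bed voidage effects (an Ergun-type analysis would refine the prefactor), but it reproduces the reported proportionality between U mf and G 0.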
Agglomerate size of nano-particles as a function of centrifugal acceleration for a Richardson and Zaki exponent n = 5 (reprinted from Nakamura and Watano (2008) with permission from Elsevier)

NP agglomerate density as a function of centrifugal acceleration. Error bars indicate the differences with a change of n in the range 4–6 (reprinted from Nakamura and Watano (2008) with permission from Elsevier)

Matsuda et al. (2004) also studied the fluidization of a 7-nm primary particle size nanopowder in a rotating fluidized bed. They developed a model for predicting the agglomeration of NPs based on an energy balance between the energy required for disintegration of the agglomerates and the energy attainable for their disintegration. Experimentally, they found that the agglomerate size is reduced not only with increasing G 0, as reported by Nakamura and Watano (2008), but also with long-term operation.

DC and AC electric fields

Kashyap et al. (2008) studied the fluidization behavior of Tullanox 500 (an APF-type fumed silica nanopowder with a typical primary particle diameter of 10 nm) in a rectangular fluidized bed with a DC electric field. Two copper sheets, acting as the two electrodes with opposite polarities, were attached to the parallel walls of the rectangular fluidized bed. Each electrode was connected to one of two high-voltage DC power supplies capable of producing up to 8 kV of DC voltage with opposite polarities, thus producing a maximum of 16 kV across the electrodes. For the electrofluidization of Tullanox 500 NP agglomerates, the fluidized bed height was found to decrease rather than increase when the DC electric field was applied. Quintanilla et al. (2008) found similar results for DC electrofluidization of Aerosil R974. The Sevilla Powder Tester (SPT) (Quintanilla et al.
2008) was utilized as the fluidization setup, and two electrodes were placed on either side of the column and were connected to a DC high-voltage source. One of the electrodes was grounded, and a high voltage (up to 30 kV) was applied to the opposite electrode using a high-voltage DC supply. The application of the electric field resulted in a decrease of the height of the bed. The decrease was not reversible: after turning off the electric field, the height of the bed further decreased or remained the same, rather than returning to its previous height. The reason for the decrease in fluidization quality upon applying a DC electric field is that the NP agglomerates migrate toward the walls of the cell, as seen by direct visualization using a high-speed camera (Valverde et al. 2008b). The charged nanoagglomerates experience a force F = Q·E, where Q is the charge on an agglomerate and E is the DC electric field strength. This force moves them toward the walls of the fluidization column, where they get irreversibly stuck. Thus, the fluidized bed behaves more like a spouted bed, with most of the gas bypassing through a central channel depleted of agglomerates, which results in the observed decrease in bed expansion. Quintanilla et al. (2008) also studied the expanded state of the fluidized bed under the combined effects of both vertical vibration and a DC electric field (provided by electrodes surrounding the bed). When the vibration was applied to the fluidized bed, the overall solid volume fraction ϕ decreased (i.e., the bed height increased), and the quality of fluidization improved, as was previously observed (Harris 2008; Levy and Celeste 2006; Nam et al. 2004; Valverde et al. 2001a; Zhang and Zhao 2010). As the gas velocity was increased, the reduction in ϕ decreased, implying that the vibration has less effect on the expanded state at high velocities (velocities much greater than the minimum fluidization velocity).
Experiments performed at certain vibration frequencies also showed the formation of bubbles that propagated throughout the bed, which curtailed bed expansion. The formation of bubbles occurred at different frequencies, depending on both the superficial gas velocity and the effective vibrational force. By varying the strengths of the external fields (vibration and electric field), it was possible to achieve an equilibrium state that matched the expanded state of the bed under no external effects. When only vibration was applied to the fluidized bed, the quality of fluidization improved. However, when a DC electric field was applied, the bed expansion decreased dramatically, probably due to electrophoretic deposition of the particles, which made them stick to the wall of the column and not participate in the fluidization. Since the DC electric field actually decreased the NP fluidization quality, researchers have recently studied the effect of applying an AC electric field (Lepek et al. 2010; Espin et al. 2009). In both studies, Aerosil R974 was used as the bed material. Espin et al. (2009) used a cylindrical column and applied a horizontal electric field (cross-flow). They found that the AC field works by agitating the charged agglomerates, and that an optimum frequency is needed to avoid electrophoretic deposition at the walls: deposition was observed at low frequencies, while at very high frequencies the agglomerates do not appear to be agitated and there is no observable effect of the field. Lepek et al. (2010) used a rectangular fluidization cell made of polycarbonate. They applied three different electric field spatial distributions (Fig. 8): a vertical field configuration (co-flow field), a horizontal electric field configuration (cross-flow field), which is the same configuration used in Quintanilla et al. (2008) for the DC electric field experiments, and a variable field configuration (non-uniform field).
The latter configuration held the two vertical electrodes of the cross-flow arrangement at the same high voltage while grounding the metallic distributor plate at the bottom of the fluidization cell. In the non-uniform field configuration, the largest potential difference occurs in the region between the vertical electrodes and the distributor plate (Lepek et al. 2010); thus, the largest induced electric field is applied in this region. On the other hand, the field between the vertical electrodes is negligible for a bed height of the order of the separation between the electrodes. All three alternating electric field configurations (co-flow, cross-flow, and variable) were found to enhance bed expansion. For the co-flow electric field, the polarity of the electrodes plays a major role in the expansion behavior, with the arrangement with the top electrode grounded producing a higher bed expansion. In the cross-flow configuration, some bed expansion occurred, but at high velocities some of the powder was elutriated. The most effective technique to assist fluidization was the application of the non-uniform alternating electric field (see Fig. 9), which was weak in the vicinity of the free surface but strong close to the bottom of the bed.

Sketches of the three different setups used in the alternating electric field enhanced fluidization: a co-flow electric field, b cross-flow electric field, c variable electric field (reprinted from Lepek et al. (2010) with permission from Wiley)

Snapshots of a fluidized bed of unsieved R974 silica before (left) and after (right) the electric field was applied (variable field configuration) (reprinted from Lepek et al.
(2010) with permission from Wiley)

Due to the wide size and weight distribution of the NP agglomerates (especially with unsieved nanopowder), a conventional fluidized bed is highly stratified: larger and heavier agglomerates sink to the bottom of the bed, while smaller and lighter agglomerates are suspended close to the free surface. These light agglomerates are easily elutriated if the gas flow is increased to mobilize the heavier agglomerates. The alternating non-uniform electric field strongly agitates the heavier agglomerates, which destabilizes the development of gas channels close to the distributor, thus enhancing fluidization. Furthermore, the variable field has almost no effect on the light agglomerates at the top of the bed, thus avoiding excessive elutriation. The greatest advantage of this arrangement is that it assists the fluidization of unsieved nanopowder, which has a wide agglomerate size distribution. Using this technique, the powder does not have to undergo a sieving pre-treatment, which has been essential in most previous fluidization studies of R974 silica.

Secondary flow using microjets

Secondary flows in the form of jets to fluidize micron-sized particles have been widely studied. Research has been done with jets pointing upwards, downwards, or horizontally, typically with nozzle sizes of the order of millimeters. These studies have shown that, when properly designed and at high gas velocities, jets enhance fluidization by promoting turbulent mixing. Quevedo et al. (2010) and Pfeffer et al. (2008) have recently described a new method for enhancing the fluidization of agglomerates of NPs based on the use of microjets produced by micro-nozzles (diameters ranging from 127 to 508 μm) pointing downwards at a close distance to the air distributor. Micro-nozzles pointing upwards also work, but some powder between the distributor and the nozzles may then not participate in the fluidization.
In their experiments, nitrogen was used as the fluidizing gas. A low-pressure line fed gas to the column through the distributor plate (the primary flow), and a medium-pressure line (about 8 bar) supplied gas to the micro-nozzle or nozzles (the secondary flow). Part of the primary flow was bubbled through a tank containing a dilute ethanol–water solution, which substantially reduces electrostatic effects in the fluidized bed caused by triboelectrification (Pfeffer and Quevedo 2011). The nanopowders used were different metal oxides (silicas, alumina, and titania) supplied by Evonik-Degussa. These powders were sieved to remove clusters of agglomerates larger than either 500 or 850 μm that had formed during transportation and storage. According to Quevedo et al. (2010), the use of one or more micro-nozzles as a secondary flow produced a microjet with sufficient velocity (hundreds of meters per second) and shear to break up large nanoagglomerates, prevent channeling, curtail bubbling, and promote liquid-like fluidization. For example, Aerosil R974, an APF-type nanopowder, expanded up to 50 times its original bed height after the powder was processed by the microjet for about 20 min; without jet assistance, the maximum bed expansion was about 6 times (see Fig. 10).

Comparison of the non-dimensional fluidized bed height as a function of gas velocity for conventional and microjet-assisted fluidization of Aerosil R974 (reprinted from Quevedo et al. (2010) with permission from Wiley)

Microjet assistance also allows the conversion of ABF-type behavior into APF-type behavior. Without microjet assistance, a maximum bed expansion of about 2.5 times the initial bed height is obtained for Aerosil 90, 1.75 for Aeroxide Alu C, and only 1.25 for Aeroxide TiO2 P25; the latter is one of the most difficult metal oxide nanopowders to fluidize.
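A quick order-of-magnitude check shows why such small nozzles reach the quoted jet velocities: dividing a volumetric flow by the nozzle cross-section already gives hundreds of meters per second. The flow rate and nozzle diameter below are illustrative assumptions, not data from Quevedo et al. (2010).

```python
import math

# Rough jet-velocity estimate v = Q / A for a micro-nozzle.
# Nozzle diameter (within the 127-508 um range quoted in the text) and
# volumetric flow rate are assumed illustrative values.
d_nozzle = 250e-6                    # nozzle diameter [m]
Q = 1.0 / 60000.0                    # volumetric flow: 1 L/min in m^3/s
A = math.pi / 4.0 * d_nozzle**2      # nozzle cross-sectional area [m^2]
v_jet = Q / A
print(f"jet velocity ~ {v_jet:.0f} m/s")
```

Since this incompressible estimate lands near or above the speed of sound, the real exit velocity is set by compressible (choked) flow; the figure is meant only to confirm the order of magnitude.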
For these nanopowders, when the superficial gas velocity is increased above a certain value, i.e., the minimum bubbling velocity (U mb), the bed does not expand further and the bed height remains constant. As a result of applying the microjet(s), the fluidized bed expansion of ABF nanopowders is increased 13–15 times for A90 and Alu C, and 5–6 times for TiO2 (see Fig. 11). The fluidization is much smoother and more homogeneous (APF-like), there is very little, if any, elutriation, and the onset of bubbling is delayed due to the better dispersion of the powder in the gas phase. Microjet-assisted NP fluidization was also found to improve solids motion and prevent powder packing in internals (Quevedo et al. 2010), and can easily be scaled up by adding additional micro-nozzles.

Images corresponding to the fluidization of Aeroxide TiO2 P25 in a 5-inch (12.7 cm) ID column. a Initial bed height, b maximum bed height when fluidized with microjet assistance, and c close-up of the fluidized bed surface. The fluidized bed expanded from 5.5 inches (14.0 cm) to 25.5 inches (64.8 cm), and the surface of the bed shows no bubbles (reprinted from Quevedo et al. (2010) with permission from Wiley)

King et al. (2009) also used microjet-assisted NP fluidization in their atomic layer deposition (ALD) experiments in a glass fluidized bed reactor (FBR) at a pressure of around 1 mbar and at temperatures between 100 and 500 °C. ALD is a gas-phase reactive process by which nanoscale functional layers can be chemically bonded to the surfaces of fine particles (see also the section "Applications and challenges"). Nozzle diameter, pressure, and relative flow rates were studied at a variety of conditions to optimize NP fluidization behavior in the presence of reactive precursors. In a more recent ALD study to coat ZnO onto TiO2 NPs, King et al.
(2010) used a microjet-assisted FBR with isopropyl alcohol-based (instead of water-based) ALD to remove undesirable electrostatic effects, as suggested by Pfeffer and Quevedo (2011). They also used a rotating tube suspended in the center of the reactor to which three micro-nozzles (two upward facing and one downward facing) were attached. This configuration, along with the alcohol-based ALD process, increased the dense phase to bubble phase ratio in the FBR to 89:11 from 55:45 with conventional water-based ALD.

Mixing of fluidized nanopowders

Some studies have been devoted to the mixing of fluidized nanopowders, both the mixing of the agglomerates and the mixing inside agglomerates (i.e., exchange of material between agglomerates). Nam et al. (2004) studied the mixing characteristics of a vibro-fluidized bed of NPs by dyeing some of their nanosilica blue to act as a tracer. They found very good mixing after 2 min of fluidization (the entire column of particles turned blue). Huang et al. (2008) studied the mixing of silica R972 by adding less than 5 % phosphor particles with a diameter of 3.7 μm to the nanopowder. By mixing the materials well, composite agglomerates were formed, and the phosphor particles were used as tracers. By giving a light pulse and using a photosensitive detector, the mixing rate was determined. Huang et al. (2008) showed that the mixing rate was much lower than for a bed of FCC particles: both the radial and the axial dispersion coefficients were two orders of magnitude lower. Nam et al. (2004) also reported some preliminary mixing tests with different materials (nano-silica with nano-titania and nano-molybdenum oxide) using SEM–EDX (scanning electron microscopy–energy-dispersive X-ray analysis). They observed proper mixing of the agglomerates, but could not determine whether the agglomerates retained their integrity during fluidization or whether they broke and re-formed rapidly. Hakim et al.
(2005b) colored two batches of Aerosil OX-50 silica NPs with red and green dye, and put the two batches together with an uncolored (white) batch of the same material in a fluidized bed column. The powders were fluidized together for 1 h under mechanical vibration, and a sample of the resulting powder was analyzed under a light microscope. They observed agglomerates containing all three colors, indicating that the initial agglomerates broke apart and re-formed into new complex agglomerates. This result offers qualitative evidence of the dynamic agglomeration of pre-existing NP agglomerates during fluidization, although Hakim et al. (2005b) did not report the scale of the mixing. Nakamura and Watano (2008) performed more detailed mixing studies of different NPs, nanosilica and nanoalumina, in a rotating fluidized bed. They also obtained good mixing, but the mixing occurred at a scale of about 50 μm, as shown in the SEM–EDX images (see Fig. 12). Apparently, parts of the agglomerates are exchanged, but the mixing does not take place down to the scale of individual NPs. This could partly be explained by the fact that the NPs used are produced by flame synthesis and might have formed sintered networks (also called sub-agglomerates), but such networks are typically not larger than 1 μm. Apparently, van der Waals forces and possibly capillary forces also play a role (see the section "Forces between NPs") in keeping the sub-agglomerates together at a scale of around 50 μm.

Element mapping images of the film surface of a mixing sample (G 0 = 40; U 0/Umf = 1.5; SEM magnification 1,000 times; mixing time 6 min) (reprinted from Nakamura and Watano (2008) with permission from Elsevier)

Ammendola and Chirone (2010) applied SEM–EDX analysis to samples of a sound-assisted NP fluidized bed of initially unmixed alumina and copper oxide.
They concluded from the elemental maps that mixing of the agglomerates required just a few minutes, while mixing inside the agglomerates (i.e., exchange of material at the μm scale) required 80–150 min. Quevedo et al. (2010) performed NP fluidization experiments with alumina and iron oxide nanopowders, and studied powder samples using TEM–EELS (transmission electron microscopy–electron energy-loss spectroscopy). This enabled them to investigate the mixing behavior of the two nanopowders at the nanoscale. They found that for conventional fluidization mixing occurred only at the microscale; no mixing at the nanoscale took place. However, a powder sample taken after microjet processing was completely mixed, and agglomerates had indeed exchanged individual NPs. This indicates that microjets can promote nanoscale mixing, while other assistance methods only seem to yield micro-scale mixing (i.e., exchange of sub-agglomerates).

Modeling of NP fluidized beds

The size of NP agglomerates

A number of semi-empirical models can be found in the literature aimed at predicting agglomerate size in NP fluidized beds. Chaouki et al. (1985) proposed that NP agglomerates in the fluidized bed are clusters of the fixed bed existing prior to fluidization. The size of the agglomerates can then be inferred from the balance between the attractive van der Waals force between particles and the agglomerate weight, which should equal the drag force on the agglomerate at minimum fluidization. Morooka et al. (1988) proposed an energy balance model for estimating agglomerate size, in which the energy generated by laminar shear plus the kinetic energy of the agglomerate was equated to the energy required to break the agglomerate. Iwadate and Horio (1998) presented a model to predict the agglomerate size in a bubbling bed.
In their model, they postulated that the adhesive force between agglomerates is balanced by the expansion force caused by bubbles; this model therefore cannot be applied to non-bubbling fluidization. Zhou and Li (1999) proposed an equation in which the joint action of the drag and collision forces is balanced by the gravitational and cohesive forces. Nevertheless, this approach is only valid at high Reynolds numbers (turbulent flow), while typical values of the Reynolds number around an agglomerate in fluidized beds of NPs are small (Zhu et al. 2005). Mawatari et al. (2003) wrote a force balance between the van der Waals attractive force between agglomerates and the separation forces, including gravity, drag, and vibration if present. Matsuda et al. (2004) proposed an energy balance equation based on the assumption that there exists an attainable energy for disintegration of agglomerates proportional to a power law of the effective acceleration; the exponent of this power law was fitted to experimental results. A drawback of these semi-empirical models for the estimation of agglomerate size is that they require as input several experimental observations, which are unknown a priori. Data on the minimum fluidization gas velocity are needed in the Morooka et al. (1988) model. Bed porosity data are required in the equations derived from the models of Matsuda et al. (2004) and Mawatari et al. (2003), the latter also requiring measurements of the minimum velocity for channel breakage. The relative agglomerate velocity appears in the predictive equation proposed by Zhou and Li (1999). Other fluidized bed data necessary in the models described above are the bed void fraction, bubble size, particle pressure in the bubbling bed, and coordination number of the agglomerates at minimum fluidization. For a detailed review of these models, the interested reader is referred to Yang (2005). Castellanos et al.
(2005) presented a predictive equation to estimate agglomerate size, originally derived to estimate the size of agglomerates of micron-sized particles in a fluidized bed. This equation was derived from a general model that considers the limit of mechanical stability of tenuous objects (Kantor and Witten 1984). In the fluidized state, micron-sized primary particles agglomerate due to the action of the interparticle attractive force F 0, which in most cases is due to the van der Waals interaction (Castellanos 2005). The weight of the agglomerate, which acts uniformly through the agglomerate body, is compensated by the hydrodynamic friction from the surrounding gas, which acts mainly at its surface due to the flow screening effect. As the agglomerate grows in size, the local shear force on a particle attached at the outer layer of the agglomerate was estimated as \( F_{\text{s}} \approx W_{\text{p}} \, k_{\text{a}}^{D_{\text{a}} + 2} \), where W p is the particle weight, k a is the ratio of the agglomerate size to the particle size, and D a is the fractal dimension of the agglomerate (Castellanos et al. 2005). Particles continue to adhere to the agglomerate as long as the interparticle attractive force F 0 is larger than F s. Thus, the balance F s = F 0 yields an equation to predict the limiting agglomerate size: $$ k_{\text{a}} \approx \mathrm{Bo}_{\text{g}}^{\frac{1}{D_{\text{a}} + 2}} $$ where Bo g is the granular Bond number, defined as the ratio of the interparticle attractive force F 0 to the particle weight W p. This model was later adapted by Valverde and Castellanos (2007a) to NP fluidization by considering NP simple agglomerates, which exist before fluidization, as effective particles undergoing agglomeration due to attractive forces between them in the NP fluidized bed. Thus Eq.
4 was adapted to calculate the complex agglomerate size d **: $$ d^{**} \approx d^{*} \left( \frac{F}{W^{*}} \right)^{\frac{1}{D_{\text{a}} + 2}} $$ where d * is the size of the simple agglomerates, F is the attractive force between these simple agglomerates, W * is their weight, and D a is the fractal dimension of the complex agglomerates. According to statistical analysis of TEM images (Sánchez-López and Fernández 2000) and other indirect measurements (Nam et al. 2004; Wang et al. 2006b), D a is close to 2.5. SEM images show that d * is generally of the order of tens of microns. A typical value of F is 10 nN when it is assumed that the main source of attraction between the simple agglomerates is the van der Waals interaction. This value may increase if the particles are hydrophilic and the fluidizing air is not dried, which leads to the formation of capillary bridges between the agglomerates (Valverde and Castellanos 2007a). W * can be calculated as \( W^{*} = (d^{*}/d_{\text{p}})^{D_{\text{a}}} W_{\text{p}} \), where d p is the size of the primary NPs and W p their weight. Results predicted from Eq. 5 yield agglomerate sizes of the order of hundreds of microns. These results were compared with experimental data reported in the literature for a variety of conditions (particle size and density, particle surface hydrophobicity, use of fluidization assistance techniques, etc.), and good agreement was generally found (Valverde and Castellanos 2007a). Moreover, according to Eq. 5, the physical properties of the fluidizing gas, such as gas viscosity and density, should not affect agglomerate size. This was confirmed in a work in which the mean agglomerate size was measured directly by laser-based planar imaging and indirectly derived from bed expansion data for fluidization of titania and silica with nitrogen and neon (Valverde et al. 2008a).
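Plugging typical numbers into this model indeed returns sizes of hundreds of microns. The sketch below uses assumed, order-of-magnitude inputs (primary particle size and density, simple-agglomerate size, attractive force); none of the values are measurements from a specific experiment in the text.

```python
import math

# Illustrative evaluation of Eq. 5 with assumed order-of-magnitude inputs.
d_p = 12e-9        # primary NP size [m] (assumed, typical fumed silica)
d_star = 30e-6     # simple-agglomerate size [m] ("tens of microns")
F = 10e-9          # attraction between simple agglomerates [N] (~10 nN)
D_a = 2.5          # fractal dimension of the complex agglomerates
rho_p = 2200.0     # primary particle density [kg/m^3] (assumed, silica)
g = 9.81

# Primary particle weight W_p and simple-agglomerate weight
# W* = (d*/d_p)^{D_a} W_p
W_p = rho_p * math.pi / 6.0 * d_p**3 * g
W_star = (d_star / d_p) ** D_a * W_p

# Complex agglomerate size d** ~ d* (F/W*)^{1/(D_a+2)}
d_2star = d_star * (F / W_star) ** (1.0 / (D_a + 2.0))
print(f"d** ~ {d_2star * 1e6:.0f} um")   # of the order of hundreds of microns
```

With these inputs the predicted d ** comes out at roughly 150 μm, consistent with the "hundreds of microns" reported in the text.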
The role of effective acceleration on agglomerate size in the NP fluidized bed

The effective acceleration g ef in the fluidized bed can be increased by means of a centrifugal fluidized bed setup. An increase of the effective acceleration g ef increases the effective weight of the particles, which decreases the granular Bond number and therefore the size of the agglomerates according to Eq. 5. Matsuda et al. (2004) carried out an extensive series of centrifugal fluidized bed experiments on titania NPs. The agglomerate size was inferred by fitting measurements of the minimum fluidization velocity to empirical correlations with the agglomerate Archimedes and Reynolds numbers. The results indicated a decrease of agglomerate size as g ef was increased, in good agreement with the values predicted by Eq. 5 (Valverde and Castellanos 2007a). Another way to change the effective acceleration field in a NP fluidized bed, and thus to modify the agglomerate size, is to apply an external source of energy such as vibration (Quintanilla et al. 2008; Nam et al. 2004) or an alternating electric field (Lepek et al. 2010). In the case of vertical vibration, the root-mean-square effective acceleration is increased up to g ef ~ gΛ (Valverde and Castellanos 2006a), where $$ \Lambda = 1 + \frac{A\omega^{2}}{g} $$ where A is the vibration amplitude, ω = 2πf, f is the vibration frequency, and g = 9.81 m/s2 is the gravitational acceleration. The consequent decrease of agglomerate size according to Eq. 5, with W * multiplied by Λ, would then explain the increase of fluidized bed expansion observed experimentally (Nam et al. 2004; Quintanilla et al. 2008; Valverde and Castellanos 2008). The effective acceleration can also be increased by applying an alternating electric field to the fluidized bed.
Since NP agglomerates are generally charged due to triboelectric charging, an externally applied oscillating electric field agitates the agglomerates in a non-invasive way. This gives rise to an additional shear force that balances the electrical force on the agglomerates. In the case of a horizontal electric field, the root-mean-square effective acceleration is increased by a factor (Espin et al. 2009): $$ \Lambda = \sqrt{1 + \left( \frac{Q^{**} E_{\text{rms}}}{W^{**}} \right)^{2}} $$ where Q ** and W ** are the electrical charge and weight, respectively, of the complex agglomerates, and E rms is the root-mean-square strength of the alternating electric field. Again, the predicted decrease of agglomerate size according to Eq. 5 would explain the increase in bed expansion observed for NP fluidized beds excited by alternating electric fields (Espin et al. 2009). Nevertheless, the possible influence of the increased drag on particles oscillating with respect to the surrounding fluid, which is well known to occur in liquid suspensions, should also be addressed in future investigations (Chan et al. 1972). A relevant result also predicted by Eq. 5, but to our knowledge not yet observed experimentally, is that the agglomerate size increases as the effective acceleration is decreased. Accordingly, gas fluidization of NPs under microgravity conditions would lead to the formation of extremely porous beds, as seen in liquid suspensions, where agglomerate size is limited by thermal agitation.
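The two enhancement factors can be evaluated side by side. In the sketch below, the vibration amplitude and frequency, the agglomerate charge, size, and density, and the field strength are all illustrative assumptions, not values from the cited experiments; the point is only that both mechanisms yield Λ > 1 and hence a smaller predicted agglomerate size through Eq. 5.

```python
import math

g = 9.81
D_a = 2.5   # fractal dimension of the complex agglomerates

# Vertical vibration: Lambda = 1 + A*omega^2/g
A, f = 0.5e-3, 50.0                        # amplitude [m], frequency [Hz] (assumed)
lam_vib = 1.0 + A * (2.0 * math.pi * f) ** 2 / g

# Alternating horizontal field: Lambda = sqrt(1 + (Q** E_rms / W**)^2)
d2, rho2 = 200e-6, 50.0                    # agglomerate size [m], density [kg/m^3] (assumed)
W2 = rho2 * math.pi / 6.0 * d2**3 * g      # complex agglomerate weight [N]
Q2, E_rms = 1e-14, 2e5                     # tribo-charge [C], field [V/m] (assumed)
lam_field = math.sqrt(1.0 + (Q2 * E_rms / W2) ** 2)

# Either factor multiplies the agglomerate weight in Eq. 5, shrinking the
# relative agglomerate size by Lambda^{-1/(D_a+2)}.
for name, lam in (("vibration", lam_vib), ("AC field", lam_field)):
    print(f"{name}: Lambda = {lam:.2f}, size factor = {lam ** (-1.0 / (D_a + 2.0)):.2f}")
```

Note how weakly the size responds to Λ: because of the 1/(D a + 2) exponent, even a sixfold increase in effective acceleration shrinks the agglomerates by only a few tens of percent.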
A modified R–Z equation for NP fluidized bed expansion

The R–Z phenomenological equation is widely accepted to correlate the superficial fluidizing velocity v f and the particle volume fraction ϕ of uniformly fluidized beds (Richardson and Zaki 1954): $$ \frac{v_{\text{f}}}{v_{\text{p0}}} = \left( 1 - \phi \right)^{n} $$ where v p0 is the Stokes settling velocity of a single particle at low particle Reynolds number: $$ v_{\text{p0}} = \frac{1}{18} \frac{\left( \rho_{\text{p}} - \rho_{\text{f}} \right) g \, d_{\text{p}}^{2}}{\mu} $$ where ρ p is the particle density, ρ f is the fluid density, d p is the particle size, and μ is the viscosity of the fluid. The exponent n in Eq. 8 is an empirical parameter. Richardson and Zaki (1954) reported in their pioneering experimental work n = 4.65 in the small particle Reynolds number (Re t) regime, with n decreasing as Re t increased. A theoretical derivation by Batchelor and Wen (1982) for Re t < 0.1 using a renormalization method led to the equation v f/v p0 ≈ 1 − 5.6ϕ, which conforms to the dilute limit of the R–Z equation for n = 5.6. Originally, the R–Z equation was derived for fluidization of noncohesive coarse beads (of size d p > ~50 μm) fluidized by liquids, which normally exhibit uniform fluidization. It has been shown that a modified version is also a useful correlation for uniform gas-fluidized beds of agglomerated fine and ultrafine particles (Nam et al. 2004; Valverde et al. 2001b). In this case, particle agglomeration changes the internal flow length scale, which turns out to be determined by the agglomerate size instead of the individual particle size. Thus, in the case of NP fluidized beds, the velocity scale in the R–Z equation should be changed to the terminal settling velocity of the actual fluidizing units, namely the agglomerates, v **. According to this argument, Wang et al.
(2002) fitted their experimental data on NP fluidized beds to the modified equation $$ \frac{v_{\text{g}}}{v^{**}} = \left( 1 - \phi \right)^{n} $$ By considering v ** and n as fitting parameters, writing \( v^{**} = (1/18)\,\rho^{**} g (d^{**})^{2}/\mu \), and assuming that the agglomerate density ρ ** can be approximated by the powder bulk density ρ b, Wang et al. inferred the agglomerate sizes in fluidized beds of several nanopowders. A similar approach was adopted by Jung and Gidaspow (2002), who used the agglomerate size obtained in this way as an input to an elaborate simulation aimed at describing the sedimentation of the bed. Since n was treated as a fitting parameter, Wang et al. (2002) obtained values of n as low as 3, which should correspond to turbulent conditions (Richardson and Zaki 1954), yet the Reynolds number in fluidized beds of NPs is typically smaller than 0.1 (Zhu et al. 2005). It may be argued that, since NP fluidized beds operate in the low Reynolds number regime, the R–Z exponent cannot be used as a free fitting parameter, but instead must be fixed to a value around n ≈ 5 corresponding to the viscous limit (Batchelor and Wen 1982). Equation 10 has been further improved in order to take into account the effective screening of the gas flow by the agglomerates. Valverde et al. (2001b) assumed that agglomerates are approximately spherical and that the agglomerate hydrodynamic radius can be approximated by its radius of gyration. As estimated by Zhu et al. (2005), the error in assuming that NP agglomerates behave as impermeable particles for the purposes of hydrodynamic analysis is small. Thus, the agglomerate volume fraction ϕ ** is used instead of the particle volume fraction ϕ in this modified approach: $$ \frac{v_{\text{g}}}{v^{**}} = \left( 1 - \phi^{**} \right)^{n} $$ where ϕ ** is the volume fraction of the complex agglomerates in the NP fluidized bed.
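Once v ** has been fitted, the Wang et al. procedure amounts to inverting the Stokes settling velocity for the agglomerates (Eq. 9 applied to the agglomerates, with ρ ** ≈ ρ b), i.e., d ** = (18 μ v **/(ρ b g))^{1/2}. In the sketch below, the fitted velocity and the bulk density are assumed illustrative values, not data from the cited studies.

```python
import math

# Agglomerate size inferred from a fitted terminal velocity v** via
# v** = (1/18) rho_b g (d**)^2 / mu, with rho** approximated by the powder
# bulk density as in Wang et al. (2002). Inputs are assumed values.
mu = 1.8e-5          # gas viscosity [Pa s] (air/nitrogen, ambient)
rho_b = 40.0         # powder bulk density [kg/m^3] (assumed)
g = 9.81
v_2star = 2e-2       # fitted settling velocity of the agglomerates [m/s]

d_2star = math.sqrt(18.0 * mu * v_2star / (rho_b * g))
print(f"inferred d** ~ {d_2star * 1e6:.0f} um")   # of the order of 100 um
```

With these assumed inputs the inferred agglomerate size is again of the order of 10^2 μm, in line with the predictions of Eq. 5 and with direct imaging.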
It is worth recalling that the agglomerates observed in NP fluidized beds may show an intricate hierarchical structure (Wang et al. 2002), wherein individual NPs first link into a three-dimensional netlike structure (sub-agglomerates), which then coalesce into the simple agglomerates. According to Wang et al. (2002), these simple agglomerates aggregate into complex agglomerates when the bed is fluidized. Taking into account this multi-stage agglomerate structure (see the section "The fractal morphology of NP agglomerates"), Eq. 11 has been rewritten as (Valverde and Castellanos 2006b): $$ \frac{v_{\text{g}}}{v_{\text{p0}}} = \frac{N_{0}}{k_{0}} \frac{N}{k} \frac{N^{*}}{k^{*}} \left( 1 - \frac{k_{0}^{3}}{N_{0}} \frac{k^{3}}{N} \frac{(k^{*})^{3}}{N^{*}} \phi \right)^{n} $$ where N 0 is the number of individual NPs aggregated in the so-called sub-agglomerates of size d 0 and k 0 = d 0/d p is the relative size of these sub-agglomerates (related by a fractal dimension D 0 = ln N 0/ln k 0). N is the number of sub-agglomerates aggregated in the so-called simple agglomerates of size d * and k = d */d 0 is the relative size of these simple agglomerates (related by a fractal dimension D = ln N/ln k). Finally, N * is the number of simple agglomerates (existing before fluidization) that aggregate in the fluidized bed to form the so-called complex agglomerates of size d ** and k * = d **/d * is the relative size of these complex agglomerates (related by a fractal dimension D * = ln N */ln k *). Likewise, the predictive equation to estimate agglomerate size (Eq. 4) can be further elaborated to take this multi-step agglomeration process into account (Valverde and Castellanos 2008). Equation 12 allows us to incorporate in the model any additional knowledge about the multiple agglomeration steps that originate the complex agglomerates.
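A quick numerical check, using arbitrary illustrative stage sizes, confirms that when the three fractal dimensions coincide the products of prefactors in the multi-stage expansion equation collapse to the single-exponent forms k a^{D a − 1} and k a^{3 − D a}:

```python
import math

# Verify the collapse of the multi-stage prefactors when D0 = D = D* = D_a.
# The relative sizes of the three stages below are arbitrary examples.
D_a = 2.5
k0, k, kstar = 8.0, 20.0, 15.0                 # relative sizes of the stages
N0, N, Nstar = k0**D_a, k**D_a, kstar**D_a     # counts per stage (N = k^D)
k_a = k0 * k * kstar                           # global relative size d**/d_p
N_a = N0 * N * Nstar                           # global particle count

prefactor = (N0 / k0) * (N / k) * (Nstar / kstar)
screening = (k0**3 / N0) * (k**3 / N) * (kstar**3 / Nstar)
assert math.isclose(prefactor, k_a ** (D_a - 1.0))
assert math.isclose(screening, k_a ** (3.0 - D_a))
print("global fractal dimension:", math.log(N_a) / math.log(k_a))
```

The printed global dimension equals D a, showing that the global fractal dimension is well defined exactly when the per-stage dimensions agree.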
It might well happen that the fractal dimension of the simple agglomerates D = ln N/ln k is not the same as the fractal dimension of the complex agglomerates D * = ln N */ln k *, or the fractal dimension of the sub-agglomerates D 0 = ln N 0/ln k 0. That will depend on the agglomeration mechanism of NPs in the nanopowder synthesis process. In that case, the global fractal dimension D a = ln N a/ln k a of the complex agglomerate, where N a = N * N N 0 and k a = k * k k 0, would not be well defined. By assuming that the global fractal dimension definition is valid (D 0 = D = D * = D a), Eq. 12 can be rewritten as $$ \frac{{v_{\text{g}} }}{{v_{\text{p0}} }} = k_{\text{a}}^{{D_{\text{a}} - 1}} \left( {1 - k_{\text{a}}^{{3 - D_{\text{a}} }} \phi } \right)^{n} $$ where \( k_{\text{a}} = d^{**}/d_{\text{p}} \). Equation 13 has been employed to estimate the agglomerate size by fitting it to experimental results on bed expansion and sedimentation, yielding results in good agreement with direct observations by means of laser-based planar imaging (Nam et al. 2004; Valverde and Castellanos 2007a; Zhu et al. 2005; Wang et al. 2006a). In close analogy with gas-fluidized beds of micron-sized particles, the fractal dimension D a of the complex agglomerates obtained from fitting turns out to be close to 2.5. An increase of this value is observed when the quality of fluidization decreases, which indicates a correlation between denser agglomerates (higher values of D a) and worsening fluidization quality.
The size of gas bubbles in NP fluidized beds
Having an estimate of the maximum size of stable gas bubbles (D b) in NP fluidized beds gives an idea of the type of fluidization to be expected. Using a criterion originally derived by Harrison et al. (1961), it has been hypothesized that gas bubbles in NP fluidized beds are no longer stable if their rising velocity exceeds the terminal settling velocity of the complex agglomerates (Valverde et al.
2008a), which leads to the simple equation $$ \frac{{D_{\text{b}} }}{{d^{**} }} \approx \frac{1}{160}\frac{{\rho_{\text{p}}^{2} \,g\,d_{\text{p}}^{3} }}{{\mu^{2} }}\,k_{\text{a}}^{{2D_{\text{a}} - 3}} $$ for the ratio of maximum bubble size D b to complex-agglomerate size in NP fluidized beds. Here k a can be calculated from Eq. 4, and it may be assumed that \( D_{\text{a}} \approx 2.5 \). Following the original criterion by Harrison et al. (1961), this ratio is directly correlated to the type of fluidization to be expected. Thus, if \( D_{\text{b}} /d^{**} < 1 \), the powder would exhibit APF behavior, characterized by large bed expansion and the absence of visible gas bubbles. On the other hand, a value \( D_{\text{b}} /d^{**} > 10 \) means that stable gas bubbles of macroscopic size are likely to develop. In this case, ABF behavior, characterized by poor expansion and the presence of large bubbles, is to be expected. For intermediate cases, a transition from APF to ABF behavior would occur as the gas velocity is increased. Using Eq. 14, it was estimated, for example, \( D_{\text{b}} /d^{**} \approx 0.4 \) for fluidization of R974 silica nanopowder (Valverde and Castellanos 2007a), which led to the prediction of full suppression of bubbles for this nanopowder, as experimentally observed (Zhu et al. 2005). On the other hand, it was estimated \( D_{\text{b}} /d^{**} \approx 3.4 \) for titania P25 nanopowder (Valverde and Castellanos 2007a), which predicts for this nanopowder a transition to bubbling fluidization as the gas velocity is increased, in agreement with experimental observations (Zhu et al. 2005). The use of Eq. 14, along with a modified Wallis criterion to predict the onset of bubbling instability for fluidized agglomerates, allowed for the construction of the modified Geldart's diagram shown in Fig. 4 (Valverde and Castellanos 2007b).
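The classification implied by Eq. 14 can be written down directly: compute D_b/d** and bin it against the thresholds quoted above. A minimal sketch; the primary-particle and agglomerate parameters below are hypothetical placeholders, not the data of the cited studies.

```python
def bubble_ratio(rho_p, d_p, k_a, mu, D_a=2.5, g=9.81):
    """D_b / d** from Eq. 14: maximum stable bubble size over the
    complex-agglomerate size."""
    return (rho_p ** 2 * g * d_p ** 3 / mu ** 2) * k_a ** (2.0 * D_a - 3.0) / 160.0

def regime(ratio):
    """Fluidization type expected from the Harrison-style criterion."""
    if ratio < 1.0:
        return "APF"                      # bubbles fully suppressed
    if ratio > 10.0:
        return "ABF"                      # large stable bubbles
    return "APF-to-ABF transition"

# hypothetical silica-like inputs: 12 nm primary particles, solid density
# 2250 kg/m3, k_a of order 1e4, air viscosity 1.8e-5 Pa s
r = bubble_ratio(rho_p=2250.0, d_p=12e-9, k_a=2e4, mu=1.8e-5)
print(regime(r))
```

Note the strong sensitivity to k_a (exponent 2D_a − 3 ≈ 2 for D_a ≈ 2.5): modest changes in agglomerate size can move a powder across the regime boundaries.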
In the case of fluidization of nanopowders, particle size and density must be interpreted in this diagram as the size and density of the simple agglomerates existing before fluidization, which behave as effective particles when fluidized and agglomerate to form complex agglomerates. The typical density and size of these simple agglomerates for silica nanopowder are 50 kg/m3 and 30 μm, respectively (Valverde et al. 2008a; Zhu et al. 2005), which according to Fig. 4 would give SFE behavior (or APF in a different terminology), in agreement with experimental observations. Titania nanopowders have denser simple agglomerates (density above 100 kg/m3), which would shift the fluidization behavior of this nanopowder to SFB (or ABF in a different terminology), as seen experimentally (Valverde and Castellanos 2007b; Zhu et al. 2005).
Computational fluid dynamics modeling of NP fluidization
Computational fluid dynamics (CFD) is routinely applied in industry to support engineering design and has also become a relevant subject of research in multiphase systems, including fluidization. Reliable simulation tools can provide valuable insights into particle flow processes and, as a result, accelerate the achievement of substantial process improvements. The challenge in modeling particulate processes lies in the wide range of physical length and time scales involved. In order to justify a CFD study of NP fluidized beds, it is particularly relevant to begin with a proper formulation of the averaged equations and closure relations. Thus, a fundamental problem is to write down the equations that are to be solved, especially when the size of the agglomerates is a dynamic variable. Usually, the closure relations entering the basic fluid mechanics equations of fluidized beds are formulated on the basis of rough assumptions, since the interpretation of empirical data from engineering studies is difficult.
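One practical way to obtain the agglomerate properties that such simulations need as input is to fit the algebraic expansion law of Eq. 13 to bed-expansion data and read off k_a (hence d** = k_a d_p). A minimal grid-search fit, using synthetic data in place of real measurements; all numbers and the fixed values of D_a and n are assumptions for illustration.

```python
def eq13(phi, k_a, D_a=2.5, n=5.0):
    """Dimensionless superficial velocity v_g / v_p0 from Eq. 13."""
    return k_a ** (D_a - 1.0) * (1.0 - k_a ** (3.0 - D_a) * phi) ** n

def fit_ka(data, candidates):
    """Least-squares grid search for k_a; data = [(phi, v_g / v_p0), ...]."""
    return min(candidates, key=lambda k_a: sum((eq13(p, k_a) - y) ** 2
                                               for p, y in data))

# synthetic 'bed expansion measurements' generated with k_a = 30
data = [(phi, eq13(phi, 30.0)) for phi in (0.001, 0.002, 0.005, 0.01)]
best = fit_ka(data, [k / 10.0 for k in range(100, 1001)])  # scan k_a in [10, 100]
print(best)
```

In a real fit the data would come from measured bed heights; a gradient-based least-squares routine would replace the grid search, but the structure of the problem is the same.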
A valuable contribution to the success of CFD simulations of NP fluidized beds would thus be experimental results obtained at macroscopic, mesoscopic, or microscopic scales. A main difficulty of CFD studies is that the fluidizing units in NP fluidized beds (i.e., the complex agglomerates) continuously undergo a dynamic process of formation and disruption. In spite of this fundamental difficulty, some attempts have been made to interpret experimental results on NP fluidization by means of CFD. In these works, this problem is typically circumvented by assuming a fixed agglomerate size and density, inferred from experimental measurements. Jung and Gidaspow (2002) simulated the settling of a NP fluidized bed using an Eulerian−Eulerian (two-fluid) hydrodynamic model. The inputs to the model were a measured solids stress modulus and an agglomerate size determined from the settling curves. An interesting conclusion from their work was that the simulation results predicted nonbubbling fluidization for the NP agglomerates, while the same CFD code predicted bubbling for Geldart B particles, as observed experimentally. Furthermore, the simulation results were in close agreement with the observed sedimentation velocity in the NP fluidized bed when the gas flow supply was turned off. Wang et al. (2007a) worked on a two-fluid model based on the solids stress modulus model developed by Jung and Gidaspow (2002) and a drag force model proposed by Wang et al. (2002). Averaged solids concentration and particle velocity distributions were computed, showing a circulation pattern of the NP agglomerates in a nonbubbling fluidized bed. An interesting result of the simulations was the stratification of solids concentration, with the highest solids concentration at the bottom of the bed. The simulation results showed reasonable agreement with experimental results reported by Jung and Gidaspow (2002). Huilin et al.
(2010) used an Eulerian−Eulerian model combined with an agglomerate-based approach. As proposed by Van Wachem and Sasic (2008), the agglomerate properties used in the simulations are estimated from a force balance taking into account drag, collision, gravity, and van der Waals interactions. Huilin et al. (2010) show that this leads to agglomerate sizes in good agreement with experimental findings. An alternative approach to the Euler−Euler simulations is Euler−Lagrange simulation. In CFD models of the latter type, the gas phase is treated as continuous and the particles are modeled individually by a discrete element model (DEM). In the case of NPs, the discrete elements are the agglomerates rather than the individual NPs (Wang et al. 2008). The agglomerate motion is calculated by integrating Newton's law of motion, and the fluid is modeled by approximating the Navier−Stokes equations in a finite volume discretized framework. Agglomerate–agglomerate interactions are calculated using the soft-sphere approach, which allows for the multiple collisions that occur frequently in a dense fluidized bed. In this approach, it is assumed that when the spheres collide, they deform elastically and experience a repulsive force of strength proportional to the magnitude of the overlap. To prevent excessively large computational times, these simulations are limited to 2D (Wang et al. 2008) or pseudo-2D (van Ommen et al. 2010a) geometries. These simulations assume a constant agglomerate size (i.e., agglomerate breakage is not considered). Wang et al. (2008) showed by simulations that the stability analysis of Foscolo and Gibilaro (1987)—originally developed for conventional particles—is useful for predicting the transition from particulate to bubbling fluidization. van Ommen et al. (2010a) studied the high-velocity microjet technique for enhancing nanopowder fluidization.
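The soft-sphere contact law described above, a repulsive force proportional to the overlap, can be sketched for a head-on collision of two equal "agglomerate" spheres, time-stepped as in a DEM code. This is a toy 1-D illustration of the contact model only, not any of the cited simulations; masses, stiffness, and time step are hypothetical.

```python
def soft_sphere_collision(m=1e-9, radius=1e-4, k_n=1e-2, v0=0.1,
                          dt=1e-7, steps=200_000):
    """1-D head-on collision of two equal spheres with a linear soft-sphere
    law: repulsive normal force F = k_n * overlap while the spheres overlap."""
    x1, x2 = -1.5 * radius, 1.5 * radius   # centre positions (initial gap)
    v1, v2 = v0, -v0                       # spheres approach each other
    for _ in range(steps):
        overlap = 2.0 * radius - (x2 - x1)
        f = k_n * overlap if overlap > 0.0 else 0.0
        v1 -= (f / m) * dt                 # semi-implicit Euler: velocities first,
        v2 += (f / m) * dt                 # then positions (good energy behavior)
        x1 += v1 * dt
        x2 += v2 * dt
    return v1, v2

v1, v2 = soft_sphere_collision()
print(v1, v2)  # approximately exchanged velocities (elastic, equal masses)
```

A real DEM code adds normal damping (restitution below one), tangential friction, and the drag coupling to the gas phase; the purely elastic spring shown here is the core of the contact model.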
The simulations of van Ommen et al. (2010a) suggested that the main cause of the agglomerate size reduction and bed height increase found in microjet experiments is not the shear on the agglomerates, but rather agglomerate–agglomerate collisions: these give much larger forces on the agglomerates in the simulations. As noted above, a central problem of the current state of the art in CFD modeling of NP fluidization is that agglomerates have to be treated as rigid spheres of fixed size and density. Since complex agglomerates are formed during fluidization, experimental data have to be an input for carrying out the simulations. Fully predictive simulations, to be performed in future work, should allow the agglomerate size to be an output of the simulation results. A possible strategy would be to incorporate Eqs. 3 and 4 into the models. If the bed is externally excited, an effective acceleration can be incorporated into the model, as has been described for vibration and AC electric fields (Eqs. 5 and 6). The input parameters would in this way be primary parameters known a priori, such as simple agglomerate size (to be measured by means of SEM), particle density and size, and interparticle attractive force. This approach would be useful for evaluating the effect of external fields used to assist fluidization, thus helping to optimize their application in practical situations. A remaining issue would be to properly model the collisions between agglomerates that may lead to agglomerate breakage, as inferred from the work of van Ommen et al. (2010a) on the microjet assistance technique.
Applications and challenges
Currently, fluidization of nanopowders is applied in only a limited number of commercial processes. The two most important large-scale processes involving fluidization of nanopowders are the production of fumed metal oxides and carbon black (Flesch et al. 2008; Voll and Kleinschmit 2000).
Fumed metal oxides are nanopowders which are industrially produced in flame reactors at high temperature. In the case of fumed silica, a chlorosilane vapor (SiCl4) is mixed with air and hydrogen, and hydrolysis takes place well above 1,000 °C. Fumed silica is used in the silicone industry to provide the desired rheology and mechanical strength in silicone adhesives and silicone rubbers, and as a filler in paints, coatings, printing inks, adhesives, and unsaturated polyester resins. Fumed alumina is used to treat ink jet paper for improved ink absorbance, and fumed titania is used in cosmetic applications such as sunscreens. In the manufacture of fumed metal oxides, fluidized beds are extensively used to remove the byproduct HCl from the fumed oxides (deacidification), or for chemical modification of the surface groups, for example, to make hydrophilic fumed silica hydrophobic (Flesch et al. 2008). Oxygen-containing groups on the surface of carbon black particles strongly influence their properties, such as vulcanization rate, flow characteristics, and color. Oxidative aftertreatment of carbon black in a fluidized bed system can be used to tune these properties (Voll and Kleinschmit 2000). However, it is anticipated that in the near future, NPs will be applied much more broadly. It will be crucial to scale up production processes while precisely maintaining the specifications of the particulate product. We expect that fluidization can play an important role in both the production and application of NPs, as it can be used for operations such as reaction, coating, granulation, mixing, drying, and adsorption. Currently, NPs are already applied in, for example, chemical–mechanical polishing, in powder flow enhancement, in catalysis, and in medicine. In most of these applications, fluidization does not (yet) play a large role. NPs are used for chemical–mechanical polishing in the fabrication of semiconductor chips to prevent microscratching (Singh et al.
2002; Yang 2005). NPs are also used as a flow aid for larger particles: coating cohesive micron-sized particles with NPs can significantly increase the flowability of cohesive powders (Yang et al. 2005; Linsenbühler and Wirth 2002; van Ommen et al. 2010b; Valverde et al. 1998). Most heterogeneous catalysts consist of nanosized particles dispersed on a high surface area support. However, most catalysts of industrial importance have been developed by trial-and-error experimentation (Jacobsen et al. 2001). A better scientific basis could make catalyst development substantially more efficient. For example, advances in characterization methods have led to a better understanding of the relationships between NP properties and catalytic performance (Bell 2003). NPs play an increasing role in medicine, both for imaging and for transporting and delivering therapeutic agents (Jain 2007; Medina et al. 2007). Coating nanosized drug particles with certain biodegradable polymers allows controlled release, protects them from stomach acids, prevents them from becoming trapped in a mucus barrier so they can be targeted to specific organs of the body (Lai et al. 2008), and prevents immune cells (macrophages) from engulfing and eliminating the nanosized drug particles circulating in the bloodstream. The application of NPs also offers new possibilities toward the development of personalized medicine (Riehemann et al. 2009). A potential use of NPs is in enhanced calcium-based sorbents for CO2 capture (Li et al. 2010; Lu et al. 2009). Alternatively, silica nanopowder can be mixed with calcium hydroxide fine powder to enhance the efficiency of CO2 adsorption by improving the gas–solids contact efficiency in a fluidized bed (Valverde et al. 2011). In this case, uniformly fluidizable agglomerates of silica NPs serve as carriers of Geldart C particles with high CO2 adsorption capacity.
In several applications, core−shell NPs exhibit superior physical and chemical properties compared to their single-component counterparts (Zhong and Maye 2001; Caruso 2001); fluidization can play an important role in making such particles. The combination of two or more materials gives additional degrees of freedom in the creation of NPs and consequently an enormous number of potential particle structures. Up to now, most attention in the literature has been aimed at liquid-phase methods for synthesizing core−shell NPs. These methods typically yield only small amounts of material and are cumbersome to scale up. Moreover, such recipes are often very specific to just one type of core−shell NP. Gas-phase methods can more easily produce larger amounts of material and are typically more generic (Strobel and Pratsinis 2007; Ullmann et al. 2002). A successful technique to make nanostructured particles of various compositions in the gas phase is flame spray pyrolysis (Dosev et al. 2007; Kim and Laine 2009; Teleki et al. 2008a). An advantage of this method is that NP production and coating are carried out in a single step; a disadvantage is that rather wide particle size distributions are obtained. An alternative is to separate the synthesis of core and shell into two subsequent steps. There are several techniques available to coat NPs in a fluidized bed process; these are discussed below. A common technique for gas-phase coating of objects with a closed layer is chemical vapor deposition (CVD). In a typical CVD process, the substrate is exposed to one or more gaseous precursors, which react on the surface to produce the desired film. CVD is commonly used in the semiconductor industry, but can also be used to produce coated particles, e.g., noble metal catalyst particles and layered luminescent pigments (Czok and Werther 2006). However, CVD is less suited to coating NPs.
Since different chemical reactants coexist in the gas phase during the CVD reaction, homogeneous reactions can take place that form NPs contaminating the product. Moreover, truly uniform and conformal films on individual NPs have not been achieved (Hakim et al. 2005a). Instead of CVD, ALD can provide particles with an ultra-thin, uniform layer. This technique differs from CVD in that the chemistry is split into two half-reactions: the different reactant gases are fed to the sample consecutively rather than simultaneously. For example, in an alumina coating process, a precursor such as trimethylaluminum, binding to the surface by chemisorption in step (A), reacts with an oxidizer such as water in step (B). A simplified version of the reaction scheme is (Puurunen 2005): $$ \begin{gathered} ({\text{A}})\;\| {\text{Al}}{-}{\text{OH}} + {\text{Al}}({\text{CH}}_{3})_{3}\,({\text{g}}) \longrightarrow \| {\text{Al}}{-}{\text{O}}{-}{\text{Al}}({\text{CH}}_{3})_{2} + {\text{CH}}_{4}\,({\text{g}}) \hfill \\ ({\text{B}})\;\| {\text{Al}}{-}{\text{CH}}_{3} + {\text{H}}_{2}{\text{O}}\,({\text{g}}) \longrightarrow \| {\text{Al}}{-}{\text{OH}} + {\text{CH}}_{4}\,({\text{g}}) \hfill \\ \end{gathered} $$ where ║ denotes the solid surface. The number of times the (A)–(B) cycle is repeated determines the thickness of the coating, resulting in full control over the layer thickness at the atomic level. ALD can be applied to a wide range of particle sizes (~10 nm–500 μm) and materials. Weimer and co-workers (Ferguson et al. 2000; Hakim et al. 2005a) showed that applying ALD to particles is best carried out when the particles are fluidized. In the semiconductor industry, ALD is typically carried out under vacuum to enhance the removal of non-reacted precursors and gaseous by-products. Accordingly, Weimer and co-workers typically apply ALD to particles at low pressure, ~100 Pa.
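Because each (A)–(B) cycle deposits a fixed, self-limited amount of material, the coating thickness is simply the number of cycles times the growth per cycle; this is what gives ALD its atomic-level thickness control. A toy calculation; the growth-per-cycle value of ~0.1 nm for the trimethylaluminum/water alumina process is a typical literature figure, used here as an assumption.

```python
import math

def ald_thickness_nm(n_cycles, growth_per_cycle_nm=0.1):
    """Self-limited growth: each complete (A)-(B) cycle adds a fixed increment."""
    return n_cycles * growth_per_cycle_nm

def cycles_needed(target_nm, growth_per_cycle_nm=0.1):
    """Smallest number of full cycles reaching a target coating thickness."""
    return math.ceil(target_nm / growth_per_cycle_nm)

print(ald_thickness_nm(5))   # thickness after five cycles, in nm
print(cycles_needed(2.0))    # cycles needed for a 2 nm shell
```

The linearity in cycle number, rather than in exposure time or precursor dose, is the practical signature that distinguishes ALD from CVD-like growth.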
However, Beetstra et al. (2009) showed that ALD of fluidized particles can also be carried out at atmospheric pressure (see Fig. 13), which simplifies the fluidization of the particles and facilitates process scale-up.
[Fig. 13 caption: TEM picture of a LiMn2O4 particle coated with a thin layer of alumina (five ALD cycles) at atmospheric pressure. Such NPs can be used as cathode material in Li-ion batteries. Reprinted from van Ommen et al. (2010b) with permission from Elsevier.]
Molecular layer deposition is a technique related to ALD, in which organic rather than inorganic layers are deposited (Liang et al. 2009). Several authors have used plasma-enhanced CVD to provide micron-sized particles and NPs with a very thin layer (Jung et al. 2004; Sanchez et al. 2001; Spillmann et al. 2006; Abadjieva et al. 2011), although only Spillmann et al. (2006) coated NPs. Esmaeili et al. (2009) used a fluidized bed reactor for encapsulating NPs with a few nm of polyethylene using Ziegler−Natta catalysts. We anticipate that in the coming years, NPs will find more and more applications in medicine, catalysis, and energy processes. In some cases simple, single-material NPs can be applied, but several applications call for more complex, nanostructured particles such as core−shell particles. It will be crucial to scale up production processes while precisely maintaining the specifications of the particulate product. We believe that fluidization of NPs will play an important role in this. A strong interplay between different disciplines, including physical chemistry, materials science, reaction engineering, and fluid mechanics, is essential for reaching important breakthroughs in the manufacturing and processing of NPs. Proper fluidization of NPs is often not possible without an assistance method. As discussed earlier, we think that the use of microjets is the most promising approach. However, the exact working mechanism of these microjets is not yet fully understood.
Also some of the other assistance methods, such as the use of acoustic waves, need further research to be fully understood and optimized. Another virtually unexplored field is the modeling of reactions involving fluidized nanopowders. Given the large range of length scales that play a role—one NP agglomerate easily consists of billions of particles—a multi-scale modeling approach will be needed. The increased use of NPs will also require more attention to the safe and sustainable use of these materials. Although humans have been exposed to airborne NPs throughout their evolutionary stages, such exposure has increased dramatically over the last century due to anthropogenic sources such as combustion processes. The increasing use of engineered nanomaterials is likely to become yet another source of exposure through inhalation, ingestion, skin uptake, and injection, requiring more information about the safety and potential hazards of NPs (Oberdörster et al. 2005). According to Nel et al. (2006), a proactive approach is required in safety evaluations, and regulatory decisions should follow from there. In addition to facilitating the safe manufacture and implementation of engineered nanoproducts, these authors also foresee potential positive spin-offs of the understanding of nanotoxicity. For instance, the propensity of some NPs to target mitochondria and initiate programmed cell death could be used as a new cancer chemotherapy principle. Auffan et al. (2009) conclude on the basis of a literature study that "larger" NPs (30–100 nm) show largely the same behavior as bulk materials, while NPs smaller than 30 nm have unique properties that require specific regulations.

Fluidization can be used to process large quantities of nanopowders in the gas phase. The NPs are not fluidized as individual particles, but as agglomerates.
Because of interparticle forces such as van der Waals forces and capillary forces, agglomerates are formed, which are very dilute and have a fractal nature. The agglomerates are typically a few hundred μm in size and have a voidage of about 0.9–0.99. Regular fluidization of these nanopowders can lead to two different types of fluidization: APF (agglomerate particulate fluidization) and ABF (agglomerate bubbling fluidization). APF is smooth, liquid-like, bubble-less fluidization that is only observed for certain types of NPs and aerogels. ABF is bubbling fluidization with very little bed expansion, as also observed for other small particles of Geldart type C. To enhance the fluidization of nanopowders—especially those of the ABF type—various assistance techniques can be used: mechanical vibration, mechanical stirring, sound waves, pulsed gas flow, a centrifugal field (rotating fluidized bed), an alternating electric field, or secondary gas injection using microjets. These techniques typically lead to mixing at the micron scale: parts of agglomerates are exchanged. Only the use of microjets has been shown to lead to mixing of individual NPs, but more research needs to be done to verify this observation. Several approaches have been applied to model the behavior of fluidized nanopowders. A force balance can be used to calculate the average size of NP agglomerates in a fluidized bed, even when additional external forces (e.g., due to vibration) are exerted. An alternative is to use a modified Richardson and Zaki equation to estimate the agglomerate size. Some first attempts have been made to apply CFD, either using an Eulerian−Eulerian approach requiring specific closures to describe the agglomerates as a continuous phase, or by discrete element modeling in which the individual agglomerates are modeled.
The application of nanopowder fluidization in practice is still limited, but a wide range of potential applications is foreseen, e.g., in medicine, catalysis, and energy processes. For many applications, advanced materials incorporating NPs will be needed, and fluidization is a convenient way to transport and mix them, or process them in some other way. Fluidized beds can also be applied to provide NPs with a thin coating, obtaining core−shell NPs. Using fluidization, it is possible to process large amounts of NPs, which is convenient for applications that will require NPs on the ton scale, such as catalysis and energy conversion and storage. We expect that both the unsolved scientific challenges and the technological questions arising from novel applications will boost research in nanopowder fluidization in the coming years.
References
Abadjieva E, van der Heijden AEDM, Creyghton YLM, van Ommen JR (2011) Fluorocarbon coatings deposited on micron-sized particles by atmospheric PECVD. Plasma Process Polym (in press). doi:10.1002/ppap.201100044
Ammendola P, Chirone R (2010) Aeration and mixing behaviours of nano-sized powders under sound vibration. Powder Technol 201(1):49–56
Auffan M, Rose J, Bottero JY, Lowry GV, Jolivet JP, Wiesner MR (2009) Towards a definition of inorganic nanoparticles from an environmental, health and safety perspective. Nat Nanotechnol 4(10):634–641
Batchelor GK, Wen CS (1982) Sedimentation in a dilute polydisperse system of interacting spheres—2. Numerical results. J Fluid Mech 124:495–528
Beetstra R, Lafont U, Nijenhuis J, Kelder EM, van Ommen JR (2009) Atmospheric pressure process for coating particles using atomic layer deposition. Chem Vap Depos 15(7–9):227–233
Bell AT (2003) The impact of nanoscience on heterogeneous catalysis. Science 299(5613):1688–1691
Bushell GC, Yan YD, Woodfield D, Raper J, Amal R (2002) On techniques for the measurement of the mass fractal dimension of aggregates.
Adv Colloid Interface Sci 95(1):1–50
Butt HJ, Kappl M (2010) Surface and interfacial forces. Wiley-VCH, Weinheim
Caruso F (2001) Nanoengineering of particle surfaces. Adv Mater 13(1):11–22
Castellanos A (2005) The relationship between attractive interparticle forces and bulk behaviour in dry and uncharged fine powders. Adv Phys 54(4):263–376
Castellanos A, Valverde JM, Quintanilla MAS (2005) Physics of compaction of fine cohesive particles. Phys Rev Lett 94(7):075501
Chan KW, Baird MHI, Round GF (1972) Behaviour of beds of dense particles in a horizontally oscillating liquid. Proc R Soc A Math Phys Sci 330:537–559
Chaouki J, Chavarie C, Klvana D, Pajonk G (1985) Effect of interparticle forces on the hydrodynamic behaviour of fluidized aerogels. Powder Technol 43(2):117–125
Czok GS, Werther J (2006) Liquid spray vs. gaseous precursor injection—its influence on the performance of particle coating by CVD in the fluidized bed. Powder Technol 162(2):100–110
Dosev D, Nichkova M, Dumas RK, Gee SJ, Hammock BD, Liu K, Kennedy IM (2007) Magnetic/luminescent core/shell particles synthesized by spray pyrolysis and their application in immunoassays with internal standard. Nanotechnology 18(5):55102
El-Shall MS, Schmidt-Ott A (2006) Journal of Nanoparticle Research: guest editorial. J Nanopart Res 8(3–4):299–300
Esmaeili B, Chaouki J, Dubois C (2008) An evaluation of the solid hold-up distribution in a fluidized bed of nanoparticles using radioactive densitometry and fibre optics. Can J Chem Eng 86(3):543–552
Esmaeili B, Chaouki J, Dubois C (2009) Encapsulation of nanoparticles by polymerization compounding in a gas/solid fluidized bed reactor. AIChE J 55(9):2271–2278
Espin MJ, Valverde JM, Quintanilla MAS, Castellanos A (2009) Electromechanics of fluidized beds of nanoparticles. Phys Rev E 79(1):011304
Ferguson JD, Weimer AW, George SM (2000) Atomic layer deposition of ultrathin and conformal Al2O3 films on BN particles. Thin Solid Films 371(1):95–104
Flesch J, Kerner D, Riemenschneider H, Reimert R (2008) Experiments and modeling on the deacidification of agglomerates of nanoparticles in a fluidized bed. Powder Technol 183(3):467–479
Forrest SR, Witten TA Jr (1979) Long-range correlations in smoke-particle aggregates. J Phys A 12(5):L109–L117
Foscolo PU, Gibilaro LG (1987) Fluid dynamic stability of fluidised suspensions: the particle bed model. Chem Eng Sci 42(6):1489–1500
Friedlander SK (2000) Smoke, dust, and haze—fundamentals of aerosol dynamics. Oxford University Press, Oxford
Geldart D (1973) Types of gas fluidization. Powder Technol 7(5):285–292
Gundogdu O, Jenneson PM, Tuzun U (2007) Nano particle fluidisation in model 2-D and 3-D beds using high speed X-ray imaging and microtomography. J Nanopart Res 9(2):215–223
Guo Q, Li Y, Wang M, Shen W, Yang C (2006) Fluidization characteristics of SiO2 nanoparticles in an acoustic fluidized bed. Chem Eng Technol 29(1):78–86
Hakim LF, Blackson J, George SM, Weimer AW (2005a) Nanocoating individual silica nanoparticles by atomic layer deposition in a fluidized bed reactor. Chem Vap Depos 11(10):420–425
Hakim LF, Portman JL, Casper MD, Weimer AW (2005b) Aggregation behavior of nanoparticles in fluidized beds. Powder Technol 160(3):149–160
Harris AT (2008) On the vibration assisted fluidisation of silica nanoparticles. Int J Nanotechnol 5(2–3):179–194
Harrison D, Davidson JF, de Kock JW (1961) On the nature of aggregative and particulate fluidisation. Trans Inst Chem Eng 39:202–211
Huang C, Wang Y, Wei F (2008) Solids mixing behavior in a nano-agglomerate fluidized bed. Powder Technol 182(3):334–341
Huilin L, Shuyan W, Jianxiang Z, Gidaspow D, Ding J, Xiang L (2010) Numerical simulation of flow behavior of agglomerates in gas-cohesive particles fluidized beds using agglomerates-based approach. Chem Eng Sci 65(4):1462–1473
Iwadate Y, Horio M (1998) Prediction of agglomerate sizes in bubbling fluidized beds of group C powders. Powder Technol 100(2–3):223–236
Jacobsen CJH, Dahl S, Clausen BGS, Bahn S, Logadottir A, Nørskov JK (2001) Catalyst design by interpolation in the periodic table: bimetallic ammonia synthesis catalysts. J Am Chem Soc 123(34):8404–8405
Jain KK (2007) Applications of nanobiotechnology in clinical diagnostics. Clin Chem 53(11):2002–2009
Jung J, Gidaspow D (2002) Fluidization of nano-size particles. J Nanopart Res 4(6):483–497
Jung SH, Park SM, Park SH, Kim SD (2004) Surface modification of fine powders by atmospheric pressure plasma in a circulating fluidized bed reactor. Ind Eng Chem Res 43(18):5483–5488
Kantor Y, Witten TA (1984) Mechanical stability of tenuous objects. J Phys Lett 45(13):675–679
Kashyap M, Gidaspow D, Driscoll M (2008) Effect of electric field on the hydrodynamics of fluidized nanoparticles. Powder Technol 183(3):441–453
Kim M, Laine RM (2009) One-step synthesis of core-shell (Ce0.7Zr0.3O2)x(Al2O3)1−x [(Ce0.7Zr0.3O2)@Al2O3] nanopowders via liquid-feed flame spray pyrolysis (LF-FSP). J Am Chem Soc 131(26):9220–9229
King DM, Liang X, Zhou Y, Carney CS, Hakim LF, Li P, Weimer AW (2008) Atomic layer deposition of TiO2 films on particles in a fluidized bed reactor. Powder Technol 183(3):356–363
King DM, van Ommen JR, Pfeffer R, Weimer AW (2009) Atomic layer deposition of functional coatings on nanoparticles using a micro-jet assisted fluidized bed reactor. Paper presented at the AIChE Annual Meeting, Nashville, November 2009
King DM, van Ommen JR, Johnson S, Pfeffer R, Weimer AW (2010) Atomic layer deposition of nanoscale metal oxide layers on TiO2 nanoparticles using a micro-jet assisted fluidized bed reactor. Paper presented at the AIChE Annual Meeting, Salt Lake City, November 2010
Krames MR, Shchekin OB, Mueller-Mach R, Mueller GO, Zhou L, Harbers G, Craford MG (2007) Status and future of high-power light-emitting diodes for solid-state lighting. IEEE/OSA J Disp Technol 3(2):160–175
Kruis FE, Fissan H, Peled A (1998) Synthesis of nanoparticles in the gas phase for electronic, optical and magnetic applications—a review. J Aerosol Sci 29(5–6):511–535
Lai SK, Wang YY, Hanes J (2008) Mucus-penetrating nanoparticles for drug and gene delivery to mucosal tissues. Adv Drug Deliv Rev 61(2):158–171
Lepek D, Valverde JM, Pfeffer R, Dave RN (2010) Enhanced nanofluidization by alternating electric fields. AIChE J 56(1):54–65
Levy EK, Celeste B (2006) Combined effects of mechanical and acoustic vibrations on fluidization of cohesive powders. Powder Technol 163(1–2):41–50
Li Y, Somorjai GA (2010) Nanoscale advances in catalysis and energy applications. Nano Lett 10(7):2289–2295
Li L, King DL, Nie Z, Li XS, Howard C (2010) MgAl2O4 spinel-stabilized calcium oxide absorbents with improved durability for high-temperature CO2 capture. Energy Fuels 24(6):3698–3703
Liang X, King DM, Li P, George SM, Weimer AW (2009) Nanocoating hybrid polymer films on large quantities of cohesive nanoparticles by molecular layer deposition. AIChE J 55(4):1030–1039
Lin MY, Lindsay HM, Weitz DA, Ball RC, Klein R, Meakin P (1989) Universality in colloid aggregation. Nature 339(6223):360–362
Linsenbühler M, Wirth KE (2002) A powder on the move: coating of powder-coating particles with nanoparticle spacers by means of an electrostatic mixing process in liquid nitrogen. Eur Coat J 9:14–21
Liu H, Guo Q, Chen S (2007) Sound-assisted fluidization of SiO2 nanoparticles with different surface properties. Ind Eng Chem Res 46(4):1345–1349
Lu H, Smirniotis PG, Ernst FO, Pratsinis SE (2009) Nanostructured Ca-based sorbents with high CO2 uptake efficiency. Chem Eng Sci 64(9):1936–1943
Mandelbrot BB (1982) The fractal geometry of nature. W. H. Freeman and Company, New York
Matsuda S, Hatano H, Muramoto T, Tsutsumi A (2004) Modeling for size reduction of agglomerates in nanoparticle fluidization. AIChE J 50(11):2763–2771
Mawatari Y, Ikegami T, Tatemoto Y, Noda K (2003) Prediction of agglomerate size for fine particles in a vibro-fluidized bed. J Chem Eng Jpn 36(3):277–283
Medina C, Santos-Martinez MJ, Radomski A, Corrigan OI, Radomski MW (2007) Nanoparticles: pharmacological and toxicological significance. Br J Pharmacol 150(5):552–558
Morooka S, Kusakabe K, Kobata A, Kato Y (1988) Fluidization state of ultrafine powders. J Chem Eng Jpn 21(1):41–46
Nakamura H, Watano S (2008) Fundamental particle fluidization behavior and handling of nano-particles in a rotating fluidized bed. Powder Technol 183(3):324–332
Nam CH, Pfeffer R, Dave RN, Sundaresan S (2004) Aerated vibrofluidization of silica nanoparticles. AIChE J 50(8):1776–1785
Nel A, Xia T, Mädler L, Li N (2006) Toxic potential of materials at the nanolevel. Science 311(5761):622–627
Nichols G, Byard S, Bloxham MJ, Botterill J, Dawson NJ, Dennis A, Diart V, North NC, Sherwood JD (2002) A review of the terms agglomerate and aggregate with a recommendation for nomenclature used in powder and particle characterization. J Pharm Sci 91(10):2103–2109
Oberdörster G, Oberdörster E, Oberdörster J (2005) Nanotoxicology: an emerging discipline evolving from studies of ultrafine particles. Environ Health Perspect 113(7):823–839
Pacek AW, Nienow AW (1990) Fluidisation of fine and very dense hardmetal powders. Powder Technol 60(2):145–158
Pfeffer R, Quevedo JA (2011) Systems and methods for reducing electrostatic charge in a fluidized bed. United States Patent 7,905,433
Pfeffer R, Quevedo JA, Flesch J (2008) Fluidized bed systems and methods including micro-jet flow. United States Patent Application 20080179433
Pfeffer R, Nam CH, Dave RN, Liu G, Quevedo J, Yu Q, Zhu C (2010) System and method for nanoparticle and nanoagglomerate fluidization. United States Patent 7,658,340
Puurunen RL (2005) Surface chemistry of atomic layer deposition: a case study for the trimethylaluminum/water process.
J Appl Phys 97(12):1–52 Quevedo JA, Pfeffer R (2010) In situ measurements of gas fluidized nanoagglomerates. Ind Eng Chem Res 49(11):5263–5269 Quevedo J, Pfeffer R, Shen Y, Dave R, Nakamura H, Watano S (2006) Fluidization of nanoagglomerates in a rotating fluidized bed. AIChE J 52(7):2401–2412 Quevedo JA, Flesch J, Pfeffer R, Dave R (2007) Evaluation of assisting methods on fluidization of hydrophilic nanoagglomerates by monitoring moisture in the gas phase. Chem Eng Sci 62(9):2608–2622 Quevedo JA, Omosebi A, Pfeffer R (2010) Fluidization enhancement of agglomerates of metal oxide nanopowders by microjets. AIChE J 56(6):1456–1468 Quintanilla MAS, Valverde JM, Castellanos A, Lepek D, Pfeffer R, Dave RN (2008) Nanofluidization as affected by vibration and electrostatic fields. Chem Eng Sci 63(22):5559–5569 Quintanilla MAS, Valverde JM, Espin MJ, Castellanos A (2012) Electrofluidization of silica nanoparticle agglomerates. Ind Eng Chem Res 51(1):531–538 Rahman F (2009) Fluidization Characteristics of Nanoparticle Agglomerates; PhD dissertation. Monash University, Victoria Richardson JF, Zaki WN (1954) Sedimentation and fluidization: Part I. Trans Inst Chem Eng 32:35–53 Riehemann K, Schneider SW, Luger TA, Godin B, Ferrari M, Fuchs H (2009) Nanomedicine—challenge and perspectives. Angewandte Chemie—International Edition 48(5):872–897 Sanchez I, Flamant G, Gauthier D, Flamand R, Badie JM, Mazza G (2001) Plasma-enhanced chemical vapor deposition of nitrides on fluidized particles. Powder Technol 120(1–2):134–140 Sánchez-López JC, Fernández A (2000) TEM study of fractal scaling in nanoparticle agglomerates obtained by gas-phase condensation. Acta Mater 48(14):3761–3771 Schaefer DW (1989) Polymers, fractals, and ceramic materials. Science 243(4894):1023–1027 Seipenbusch M, Rothenbacher S, Kirchhoff M, Schmid HJ, Kasper G, Weber AP (2010) Interparticle forces in silica nanoparticle agglomerates. 
J Nanopart Res 12(6):2037–2044 Seville JPK, Willett CD, Knight PC (2000) Interparticle forces in fluidisation: a review. Powder Technol 113(3):261–268 Singh RK, Lee SM, Choi KS, Basim GB, Choi W, Chen Z, Moudgil BM (2002) Fundamentals of slurry design for CMP of metal and dielectric materials. MRS Bull 27(10):752–760 Song L, Zhou T, Yang J (2009) Fluidization behavior of nano-particles by adding coarse particles. Adv Powder Technol 20(4):366–370 Spillmann A, Sonnenfeld A, Rudolf Von Rohr P (2006) Flowability modification of fine powders by plasma enhanced chemical vapor deposition. In: 2006 NSTI Nanotechnology Conference and Trade Show—NSTI Nanotech 2006 Technical Proceedings, Boston, MA, 2006. 2006 NSTI Nanotechnology Conference and Trade Show—NSTI Nanotech 2006 Technical Proceedings, pp 315–317 Strobel R, Pratsinis SE (2007) Flame aerosol synthesis of smart nanostructured materials. J Mater Chem 17(45):4743–4756 Teleki A, Heine MC, Krumeich F, Akhtar MK, Pratsinis SE (2008a) In situ coating of flame-made TiO2 particles with nanothin SiO2 films. Langmuir 24(21):12553–12558 Teleki A, Wengeler R, Wengeler L, Nirschl H, Pratsinis SE (2008b) Distinguishing between aggregates and agglomerates of flame-made TiO2 by high-pressure dispersion. Powder Technol 181(3):292–300 Ullmann M, Friedlander SK, Schmidt-Ott A (2002) Nanoparticle formation by laser ablation. J Nanopart Res 4(6):499–509 Valverde JM, Castellanos A (2006a) Effect of vibration on agglomerate particulate fluidization. AIChE J 52(5):1705–1714 Valverde JM, Castellanos A (2006b) Fluidization of nanoparticles: a modified richardson-zaki law. AIChE J 52(2):838–842 Valverde JM, Castellanos A (2007a) Fluidization, bubbling and jamming of nanoparticle agglomerates. Chem Eng Sci 62(23):6947–6956 Valverde JM, Castellanos A (2007b) Types of gas fluidization of cohesive granular materials. 
Phys Rev E 75(3):031306 Valverde JM, Castellanos A (2008) Fluidization of nanoparticles: a simple equation for estimating the size of agglomerates. Chem Eng J 140(1–3):296–304 Valverde JM, Ramos A, Castellanos A, Watson PK (1998) The tensile strength of cohesive powders and its relationship to consolidation, free volume and cohesivity. Powder Technol 97(3):237–245 Valverde JM, Castellanos A, Quintanilla MAS (2001a) Effect of vibration on the stability of a gas-fluidized bed of fine powder. Phys Rev E 64(2I):213021–213028 Valverde JM, Quintanilla MAS, Castellanos A, Mills P (2001b) The settling of fine cohesive powders. Europhys Lett 54(3):329–334 Valverde JM, Quintanilla MAS, Castellanos A, Mills P (2003) Experimental study on the dynamics of gas-fluidized beds. Phys Rev E 67(1 2):163031–163035 Valverde JM, Quintanilla MAS, Castellanos A, Lepek D, Quevedo J, Dave RN, Pfeffer R (2008a) Fluidization of fine and ultrafine particles using nitrogen and neon as fluidizing gases. AIChE J 54(1):86–103 Valverde JM, Quintanilla MAS, Espin MJ, Castellanos A (2008b) Nanofluidization electrostatics. Phys Rev E 77(3):031301 Valverde JM, Pontiga F, Soria-Hoyo C, Quintanilla MAS, Moreno H, Duran FJ, Espin MJ (2011) Improving the gas-solids contact efficiency in a fluidized bed of CO2 adsorbent fine particles. Phys Chem Chem Phys 13(33):14906–14909 van Ommen JR, King DM, Weimer A, Pfeffer R, van Wachem BGM (2010a) Experiments and modelling of micro-jet assisted fluidization of nanoparticles. In: Kim SD, Kan Y, Lee JK, Seo YC (eds) Proceedings of the 13th International Conference on Fluidization. Engineering Conferences International, New York, pp 479–486 van Ommen JR, Yurteri CU, Ellis N, Kelder EM (2010b) Scalable gas-phase processes to create nanostructured particles. Particuology 8(6):572–577 Van Wachem B, Sasic S (2008) Derivation, simulation and validation of a cohesive particle flow CFD model. AIChE J 54(1):9–19 Voll M, Kleinschmit P (2000) Carbon, 6. Carbon black. 
Ullmann's encyclopedia of industrial chemistry. Wiley-VCH Verlag GmbH and Co. KGaA. doi:10.1002/14356007.n05_n05 Wang Y, Wei F, Jin Y, Luo T (2000) Agglomerate particulate fluidization and E-particles. Paper presented at the Proceedings of the Third Joint China/USA Chemical Engineering Conference (CUChE-3), Beijing Wang Y, Gu G, Wei F, Wu J (2002) Fluidization and agglomerate structure of SiO2 nanoparticles. Powder Technol 124(1–2):152–159 Wang XS, Palero V, Soria J, Rhodes MJ (2006a) Laser-based planar imaging of nano-particle fluidization: Part I-determination of aggregate size and shape. Chem Eng Sci 61(16):5476–5486 Wang XS, Palero V, Soria J, Rhodes MJ (2006b) Laser-based planar imaging of nano-particle fluidization: Part II-mechanistic analysis of nanoparticle aggregation. Chem Eng Sci 61(24):8040–8049 Wang SY, He YR, Lu HL, Zheng JX, Liu GD, Ding YL (2007a) Numerical simulations of flow behaviour of agglomerates of nano-size particles in bubbling and spouted beds with an agglomerate-based approach. Food Bioprod Process 85(3 C):231–240 Wang XS, Rahman F, Rhodes MJ (2007b) Nanoparticle fluidization and Geldart's classification. Chem Eng Sci 62(13):3455–3461 Wang XS, Rahman F, Rhodes MJ (2008) Application of discrete element method simulation for studying fluidization of nanoparticle agglomerates. Can J Chem Eng 86(3):514–522 Yang WC (2005) Fluidization of fine cohesive powders and nanoparticles—a review. J Chin Inst Chem Eng 36(1):1–15 Yang J, Sliva A, Banerjee A, Dave RN, Pfeffer R (2005) Dry particle coating for improving the flowability of cohesive powders. Powder Technol 158(1–3):21–33 Yu Q (2005) Gas fluidization of nanoparticles; PhD dissertation. New Jersey Institute of Technology, Newark Yu Q, Dave RN, Zhu C, Quevedo JA, Pfeffer R (2005) Enhanced fluidization of nanoparticles in an oscillating magnetic field. AIChE J 51(7):1971–1979 Zeng P, Zhou T, Yang J (2008) Behavior of mixtures of nano-particles in magnetically assisted fluidized bed. 
We would like to thank David Valdesueiro and Kasper Kuijpers for their assistance in preparing this manuscript. Jose Manuel Valverde would like to acknowledge financial support from the Spanish Government Agency Ministerio de Ciencia y Tecnologia (contract FIS2011-25161) and Junta de Andalucia (contract FQM-5735). This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.

J. Ruud van Ommen, Department of Chemical Engineering, Delft University of Technology, Julianalaan 136, 2628 BL, Delft, The Netherlands. Jose Manuel Valverde, Department of Electronics and Electromagnetism, University of Seville, Avenida Reina Mercedes s/n, 41012, Sevilla, Spain. Robert Pfeffer, Chemical Engineering Program, School for Engineering of Matter, Transport, and Energy, Arizona State University, Tempe, AZ, 85287, USA. Correspondence to J. Ruud van Ommen.
Supplementary material 1 (WMV 1,927 kb)

van Ommen, J.R., Valverde, J.M. & Pfeffer, R. Fluidization of nanopowders: a review. J Nanopart Res 14, 737 (2012). doi:10.1007/s11051-012-0737-4

Keywords: Agglomerates, Fluidized beds, Assisted fluidization, Modeling nanofluidization
Discrete & Continuous Dynamical Systems - A, May 2017, 37(5): 2565-2588. doi: 10.3934/dcds.2017110

Parabolic arcs of the multicorns: Real-analyticity of Hausdorff dimension, and singularities of $\mathrm{Per}_n(1)$ curves

Sabyasachi Mukherjee, Jacobs University Bremen, Campus Ring 1, Bremen 28759, Germany; Institute for Mathematical Sciences, Stony Brook University, Stony Brook, 11794, NY, USA

Received May 2016; Revised January 2017; Published February 2017. Fund Project: The author was supported by Deutsche Forschungsgemeinschaft DFG.

The boundaries of the hyperbolic components of odd period of the multicorns contain real-analytic arcs consisting of quasi-conformally conjugate parabolic parameters. One of the main results of this paper asserts that the Hausdorff dimension of the Julia sets is a real-analytic function of the parameter along these parabolic arcs. This is achieved by constructing a complex one-dimensional quasiconformal deformation space of the parabolic arcs, which are contained in the dynamically defined algebraic curves $\mathrm{Per}_n(1)$ of a suitably complexified family of polynomials. As another application of this deformation step, we show that the dynamically natural parametrization of the parabolic arcs has a non-vanishing derivative at all but (possibly) finitely many points. We also look at the algebraic sets $\mathrm{Per}_n(1)$ in various families of polynomials, the nature of their singularities, and the 'dynamical' behavior of these singular parameters.

Keywords: Hausdorff dimension, parabolic curves, antiholomorphic dynamics, quasiconformal deformation, multicorns. Mathematics Subject Classification: Primary: 37F10, 37F30, 37F35, 37F45.

Citation: Sabyasachi Mukherjee.
Parabolic arcs of the multicorns: Real-analyticity of Hausdorff dimension, and singularities of $\mathrm{Per}_n(1)$ curves. Discrete & Continuous Dynamical Systems - A, 2017, 37 (5): 2565-2588. doi: 10.3934/dcds.2017110
Figure 1. $\mathcal{M}_2^*$, also known as the tricorn, and the parabolic arcs (in blue) on the boundary of the hyperbolic component of period 1.

Figure 2. Pictorial representation of the image of $\left[0,1\right]$ under the quasiconformal map $L_w$, for $w=1+i/8$ (top) and $w=1$ (bottom). The Fatou coordinates of $c_0$ and $f_{c_0}^{\circ k} (c_0)$ are $1/4$ and $3/4$ respectively.
For $w=1+i/8$, $L_w(1/4)=1/8+i$ and $L_w(3/4)=7/8-i$, and for $w=1$, $L_w(1/4)=1/4+i$ and $L_w(3/4)=3/4-i$. Observe that $L_w$ commutes with $z\mapsto \overline{z}+1/2$ only when $w\in \mathbb{R}$.

Figure 3. $\pi_2 \circ F : w \mapsto b(w)$ is injective in a neighborhood of $\widetilde{u}$ for all but possibly finitely many $\widetilde{u} \in \mathbb{R}$.

Figure 4. The outer yellow curve indicates part of $\mathrm{Per}_1(1)\cap \lbrace a=\overline{b}\rbrace$, and the inner blue curve (along with the red point) indicates part of the deformation $\mathrm{Per}_1(r)\cap \lbrace a=\overline{b}\rbrace$ for some $r\in (1-\epsilon,1)$. The cusp point $c_0$ on the yellow curve is a critical point of $h_1$, i.e. a singular point of $\mathrm{Per}_1(1)$, and the red point is a critical point of $h_r$, i.e. a singular point of $\mathrm{Per}_1(r)$.
Cost yield of different treatment strategies against Clonorchis sinensis infection

Men-Bao Qian, Chang-Hai Zhou, Hui-Hui Zhu, Ying-Dan Chen & Xiao-Nong Zhou

Infectious Diseases of Poverty volume 10, Article number: 136 (2021)

Clonorchiasis is attributed to the ingestion of raw freshwater fish harboring Clonorchis sinensis. Morbidity control is targeted through the administration of antihelminthics. This study modelled the cost yield, indicated by effectiveness and utility, of different treatment strategies against clonorchiasis. About 1000 participants were enrolled from each of 14 counties selected from four provincial-level administrative divisions, namely Guangxi, Guangdong, Heilongjiang and Jilin, in 2017. Fecal examination was adopted to detect C. sinensis infection, and the behavior of ingesting raw freshwater fish was enquired about. Counties were grouped into four categories based on prevalence, namely a low prevalence group (< 1%), a moderate prevalence group (1–9.9%), a high prevalence group (10–19.9%) and a very high prevalence group (≥ 20%), while the population was divided into three subgroups, namely children aged below 14 years old, and adult females and adult males aged over 14 years old. The average cost effectiveness, indicated by the cost to treat a single case infected with C. sinensis, and the average cost utility, indicated by the cost to avert one disability-adjusted life year (DALY) caused by C. sinensis infection, were calculated. Comparisons were performed between three treatment schedules, namely individual treatment, massive chemotherapy and selective chemotherapy, in which different endemic levels and populations were considered. In the selective chemotherapy strategy, the cost to treat a single infected case in the very high prevalence group was USD 10.6 in adult males, USD 11.6 in adult females, and USD 13.2 in children. The cost increased as the endemic level decreased.
In the massive chemotherapy strategy, the cost per infected case in the very high prevalence group was USD 14.0 in adult males, USD 17.1 in adult females and USD 45.8 in children; these costs also increased as the endemic level decreased. In the individual treatment strategy, the cost was USD 12.2 in adult males, USD 15.0 in adult females and USD 41.5 in children in the very high prevalence group; USD 19.2 in adult males, USD 34.0 in adult females and USD 90.1 in children in the high prevalence group; USD 30.4 in adult males, USD 50.5 in adult females and over USD 100 in children in the moderate prevalence group; and over USD 400 in any population in the low prevalence group. As to cost utility, the differences by treatment strategy, population and endemic level were similar to those in cost effectiveness. Both cost effectiveness and cost utility indicators are highly impacted by the prevalence and the population, as well as by the treatment schedule. Adults, especially men, in areas with a prevalence over 10% should be prioritized; there, selective chemotherapy was best and massive chemotherapy was also cost effective. In moderately endemic areas the yield is not ideal, but selective chemotherapy for adult males may also be adopted. In low endemic areas, all strategies were highly costly and new strategies need to be developed.

Infections with the human liver flukes (Clonorchis sinensis, Opisthorchis viverrini and O. felineus) cause a high burden in Asia and parts of Europe [1,2,3]. They are caused by a special dietary habit: ingesting raw or undercooked freshwater fish. In particular, an estimated 15 million people are infected with C. sinensis across China, the Republic of Korea, northern Vietnam and part of Russia [4,5,6]. Diverse morbidities are associated with C. sinensis infection, among which gallstones, cholecystitis, cholangitis and cholangiocarcinoma are the most important [7,8,9,10]. An average loss of 7.5% in health could be attributable to C. sinensis infection [11].
The high burden due to severe morbidity and the availability of antihelminthics have led to the target of morbidity control through chemotherapy [12,13,14]. Preventive chemotherapy effectively decreases the prevalence and intensity of infection. A dosage of 75 mg/kg praziquantel divided into three doses in 1 day is usually applied for both individual and population treatment [13, 15, 16]. Two different strategies can be chosen in preventive chemotherapy, namely mass chemotherapy for whole communities and selective chemotherapy for people at risk within the communities [13]. Usually, persons frequently ingesting raw freshwater fish are considered at risk [17]. Preventive chemotherapy is not based on individual definitive diagnosis, and thus it is usually applied when the prevalence reaches a threshold. In contrast, individual treatment is used when infection is ascertained through definite diagnosis, i.e. detection of eggs in feces [15, 16]. To date, only a few studies have compared the cost effectiveness of different treatment schedules (individual treatment, massive and selective chemotherapy) against human liver fluke infections [18]. No study has yet considered the impact of different populations (gender and age). In particular, no cost utility analysis based on disability-adjusted life years (DALYs) has yet been implemented. In a previous study, we demonstrated the quantitative contribution of ingesting raw freshwater fish to C. sinensis infection and the performance of screening for C. sinensis cases based on the practice of eating raw freshwater fish [17, 19]. Here, these data were used to compare the cost effectiveness and cost utility of three different treatment schedules, in which the impact of prevalence levels and populations was also considered.

Study areas and participants

The study areas have been described elsewhere [17, 19].
In brief, four major clonorchiasis-endemic provincial-level administrative divisions (PLADs) in China were selected, namely Guangxi and Guangdong in the southeast and Heilongjiang and Jilin in the northeast. Correspondingly, 6, 3, 5 and 3 counties were selected from these PLADs. In each county, five villages were selected, and about 200 villagers from each village were included in the survey. Investigation procedures In 2017, each participant was asked to provide one fresh fecal sample, which was transferred to the local medical organization and examined by technicians using the Kato-Katz method with a template of 41.7 mg [20, 21]. Two smears were prepared for each sample. Each participant was also asked about the habit of ingesting raw freshwater fish. The cost was based on the average unit price of each item as applied in the field. It comprised the expenditure on fecal examination, behavioral screening, and the purchase and delivery of drugs (praziquantel). The cost of fecal examination per person (including labor and materials) was CNY 20 (USD 3.10), while the cost of behavioral screening was CNY 1 (USD 0.16). Drugs cost CNY 170 per bottle of 100 tablets (200 mg each), i.e. USD 0.26 per tablet. Drug delivery cost CNY 2 (USD 0.31) per person. Data were analyzed in SPSS for Windows (version 11.0; SPSS Institute, Inc., Chicago, USA) and Microsoft Excel (version 2016; Microsoft Corporation, Redmond, USA). One county was excluded because people there reported ingesting marine fish and the prevalence of C. sinensis was 0. Another two counties were excluded because no C. sinensis infection was detected. Finally, 14 counties were included in this study and classified into four groups based on prevalence: low prevalence (< 1%), moderate prevalence (1–9.9%), high prevalence (10–19.9%) and very high prevalence (≥ 20%).
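As a rough sketch of how these unit costs translate into a per-person treatment cost, the snippet below combines the tablet price and delivery fee with the 75 mg/kg praziquantel dosage mentioned earlier and the average body weights used in the study's modelling (35, 55 and 65 kg). Rounding the dose up to whole 200-mg tablets is our assumption; the paper does not state how partial tablets were handled.

```python
import math

# Unit costs (USD) as reported in the text
FECAL_EXAM = 3.10  # Kato-Katz examination per person
SCREENING = 0.16   # behavioral screening per person
TABLET = 0.26      # one 200-mg praziquantel tablet
DELIVERY = 0.31    # drug delivery per treated person

def drug_cost(weight_kg, dose_mg_per_kg=75, tablet_mg=200):
    """Cost of a full 75 mg/kg praziquantel course, rounded up to whole
    tablets (the rounding is an assumption, not stated in the paper)."""
    tablets = math.ceil(dose_mg_per_kg * weight_kg / tablet_mg)
    return tablets * TABLET

# Average body weights from the text: children 35 kg, adult females 55 kg,
# adult males 65 kg
for group, w in [("children", 35), ("adult female", 55), ("adult male", 65)]:
    print(group, round(drug_cost(w) + DELIVERY, 2))
```

Under this rounding, an adult male (65 kg) needs 25 tablets, so drugs plus delivery come to roughly USD 6.81 per course.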
The population was divided into three categories, namely children (≤ 14 years old), adult females (> 14 years old) and adult males (> 14 years old) [22]. Average body weight was taken as 35 kg in children, 55 kg in adult females and 65 kg in adult males [23]. A total of 75 mg/kg praziquantel, divided into three doses in one day, was assumed to be administered in all three treatment schedules [13, 15, 16]. This study modelled cost effectiveness and cost utility stratified by treatment schedule, county, endemic level and population. DALYs were used as the utility indicator; they comprise years of life living with a disability (YLDs) and years of life lost (YLLs) [24]. $$DALYs=YLDs+YLLs$$ $$YLDs=N \times P \times D$$ where N stands for the community population, P the prevalence of C. sinensis and D the average disability weight of C. sinensis infection. $$YLLs=N \times P \times I \times (L-a)$$ where N stands for the community population, P the prevalence of C. sinensis, I the incidence of cholangiocarcinoma attributed to C. sinensis infection, L the standard life expectancy and a the age at death of those with cholangiocarcinoma. Eggs per gram of feces (EPG) was calculated by multiplying the average egg count of the two smears by 24. EPG was logarithmically transformed, the average was calculated for each group, and the result was back-transformed to obtain the geometric mean of EPG (GMEPG). The disability weight was then obtained from the equation D = 0.0362 ln(GMEPG) − 0.1269 [11]. Because negative values occur when this equation is extrapolated to low GMEPG, a lower limit of 0.022 was set [11]. This is reasonable, because the loss of health at low infection intensity is entirely due to diarrhea and pain in the right upper quadrant, which are common in those infected with C. sinensis. The period of disease was set at 1 year, because prevalence instead of incidence was used in this study. To calculate the YLLs, death due to C.
sinensis infection was attributed entirely to cholangiocarcinoma, with an incidence of 25/100 000 in females and 35/100 000 in males [25]. Because cholangiocarcinoma progresses chronically, YLLs were not considered in children. The life expectancy was 79.92 years in females and 74.52 years in males [26], while the onset age of cholangiocarcinoma was taken from that of liver cancer, namely 62.35 years in females and 68.99 years in males [27]. Because the prognosis of cholangiocarcinoma is very poor, the age at death was set equal to the onset age. The cost of individual treatment comprised fecal examination plus the purchase and delivery of drugs for those infected with C. sinensis; the cost of mass chemotherapy comprised the purchase and delivery of drugs for the whole population; and that of selective chemotherapy comprised behavioral screening plus the purchase and delivery of drugs for those ingesting raw freshwater fish. The cost to treat one infected case of C. sinensis was used as the indicator in the cost effectiveness analysis, while the cost to avert one YLD, YLL or DALY was the indicator in the cost utility analysis. Average cost effectiveness and cost utility were calculated and compared, stratified by the three treatment schedules (i.e., individual treatment, mass and selective chemotherapy), endemic levels and populations. The composition of the cost was also analyzed, with the cost in each category divided by the overall cost. Epidemiological profiles The prevalence of C. sinensis infection and the proportion of persons ingesting raw freshwater fish are given in Additional file 1: Table S1 [17, 19]. The epidemiological profiles of C. sinensis prevalence and raw-freshwater fish-eating practice were similar across endemic levels, namely a higher prevalence of C. sinensis and a higher proportion of raw-fish-eating practice in males than in females and in older people than in children.
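The DALY calculation described above can be sketched as follows. The structure follows the equations in the Methods (GMEPG via log-transform, disability weight with a 0.022 floor, disability duration of 1 year); the numerical inputs in the example are illustrative only, not the paper's survey data.

```python
import math

def gmepg(epg_counts):
    """Geometric mean EPG: log-transform, average, back-transform
    (zero counts excluded, as a geometric mean requires positive values)."""
    logs = [math.log(e) for e in epg_counts if e > 0]
    return math.exp(sum(logs) / len(logs))

def disability_weight(g, floor=0.022):
    """D = 0.0362*ln(GMEPG) - 0.1269, floored at 0.022 as in the text."""
    return max(0.0362 * math.log(g) - 0.1269, floor)

def dalys(n, prev, g, cca_incidence, life_expectancy, onset_age):
    """YLDs + YLLs for one population stratum (disability lasts 1 year)."""
    ylds = n * prev * disability_weight(g)
    ylls = n * prev * cca_incidence * (life_expectancy - onset_age)
    return ylds + ylls

# Illustrative inputs: 1000 adult males, 20% prevalence, GMEPG of 300;
# cholangiocarcinoma incidence 35/100 000, life expectancy 74.52 years and
# onset age 68.99 years (the last three figures are from the text).
total = dalys(1000, 0.20, 300, 35 / 100_000, 74.52, 68.99)
```

At low GMEPG the regression term goes negative, which is exactly why the floor of 0.022 matters in practice.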
Overall, the DALYs per 1000 population were 6.4, ranging from 0.2 to 34.3 across counties: 0.5 in children (range 0–7.3), 4.6 in adult females (range 0–26.0) and 10.5 in adult males (range 0.2–42.9) (Additional file 1: Table S2). In adult females, the YLDs per 1000 were 4.3 and the YLLs per 1000 were 0.3. In adult males, the YLDs per 1000 were 9.7 and the YLLs per 1000 were 0.8. In the very high prevalence group, the cost to treat a single infected case by selective chemotherapy was USD 10.6 in adult males, USD 11.6 in adult females, USD 13.2 in children and USD 11.0 overall (Table 1 and Fig. 1). Correspondingly, the cost increased to USD 22.6, USD 32.7, USD 58.9 and USD 25.6 in the high prevalence group. In the moderate prevalence group, the cost was USD 32.8 in adult males and USD 39.7 in adult females, while it exceeded USD 200.0 in children. In the low prevalence group, the cost per infected case exceeded USD 200.0 in all populations. Table 1 Cost effectiveness (USD) of different treatment strategies against Clonorchis sinensis infection Cost effectiveness of different treatment strategies against Clonorchis sinensis infection. na not available In mass chemotherapy, the cost in the very high prevalence group was USD 14.0 in adult males, USD 17.1 in adult females, USD 45.8 in children and USD 15.3 overall (Table 1 and Fig. 1). In the high prevalence group, the cost was USD 29.2, USD 52.4, USD 105.0 and USD 39.0, respectively. In the moderate prevalence group, the cost was USD 53.5 in adult males and USD 91.3 in adult females, while it exceeded USD 300.0 in children. In the low prevalence group, the cost exceeded USD 800.0 in every population. In individual treatment, the cost in the very high prevalence group was USD 12.2 in adult males, USD 15.0 in adult females, USD 41.5 in children and USD 13.4 overall (Table 1 and Fig. 1). The cost nearly doubled in the high prevalence group compared with the very high prevalence group.
In the moderate prevalence group, the cost further increased to USD 30.4 in adult males, USD 55.0 in adult females and USD 290.4 in children, and USD 46.8 overall. In the low prevalence group, the cost was over USD 400.0 in all populations. Cost utility In the very high prevalence group, the cost to avert one DALY by selective chemotherapy was USD 172.9 in adult males, USD 223.3 in adult females, USD 337.1 in children and USD 189.4 overall (Table 2 and Fig. 2). Correspondingly, the cost increased to USD 411.0, USD 696.3, USD 2678.4 and USD 488.0 in the high prevalence group. In the moderate prevalence group, the cost was USD 789.8 in adult males and USD 1107.9 in adult females, while it exceeded USD 1750.0 in children. In the low prevalence group, the cost exceeded USD 3900.0 in all populations. Table 2 Cost utility (USD) of different treatment strategies against Clonorchis sinensis infection Cost utility of different treatment strategies against Clonorchis sinensis infection. na not available In mass chemotherapy, the cost in the very high prevalence group was USD 229.0 in adult males, USD 329.9 in adult females, USD 1171.6 in children and USD 265.5 overall (Table 2 and Fig. 2). In the high prevalence group, the cost was USD 517.9, USD 1355.4, over USD 4700.0 and USD 783.5, respectively. In the moderate prevalence group, the cost exceeded USD 1250.0 in all groups, and it was over USD 17 500.0 in every population in the low prevalence group. In individual treatment, the cost in the very high prevalence group was USD 199.1 in adult males, USD 289.0 in adult females, USD 1060.1 in children and USD 231.7 overall (Table 2 and Fig. 2). In the high prevalence group, the cost increased to USD 340.3, USD 879.9, USD 4049.6 and USD 526.2, respectively. In the moderate prevalence group, the cost further increased to USD 734.2 in adult males and over USD 1000.0 in the other populations. In the low prevalence group, the cost was over USD 8000.0 in all populations.
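The structure of these per-case figures (though not their exact values, which depend on the survey data) can be reproduced with a simple accounting sketch. The cost attribution per schedule follows the Methods description; assuming perfect diagnostic and screening sensitivity, and a fixed drug-course cost, are simplifications on our part.

```python
def cost_per_infected_case(n, prevalence, p_raw_fish, exam, screen, drug, delivery):
    """Programme cost divided by infected cases for each schedule:
    individual treatment examines everyone and treats the positives,
    mass chemotherapy treats everyone, and selective chemotherapy screens
    everyone and treats the raw-fish eaters."""
    infected = n * prevalence
    totals = {
        "individual": n * exam + infected * (drug + delivery),
        "mass": n * (drug + delivery),
        "selective": n * screen + n * p_raw_fish * (drug + delivery),
    }
    return {k: v / infected for k, v in totals.items()}

# Illustrative inputs: 1000 adults, 25% infected, 30% eating raw fish;
# unit costs (USD) from the text, with an assumed drug-course cost of 6.50.
c = cost_per_infected_case(1000, 0.25, 0.30,
                           exam=3.10, screen=0.16, drug=6.50, delivery=0.31)
```

With these inputs the ordering matches the paper's finding at high prevalence: selective chemotherapy is cheapest per case, individual treatment next, mass chemotherapy most expensive. As prevalence falls, `infected` shrinks and all three ratios blow up, which is the mechanism behind the unacceptable cost yields below 1% prevalence.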
Composition of cost In individual treatment, the overall composition was 81.6% for diagnosis (fecal examination), 17.4% for drug purchase and 1.0% for drug delivery (Table 3). The share of diagnosis was highest in children (98.2%), followed by adult females (84.1%) and then adult males (74.1%). The percentage for diagnosis in the overall population reached 57.2% in the very high prevalence group, 78.3% in the high prevalence group, 87.9% in the moderate prevalence group and 99.1% in the low prevalence group. Table 3 Cost composition of different treatment strategies against Clonorchis sinensis infection In mass chemotherapy, the overall composition was 94.7% for drug purchase and 5.3% for drug delivery (Table 3). The share of drug purchase was 91.8% in children, 94.6% in adult females and 95.4% in adult males. Because everyone in every population received treatment and the unit costs of drug purchase and delivery were the same, the composition did not vary with prevalence within any single population. However, the overall composition varied slightly across counties because of differences in population structure and body weight. In selective chemotherapy, the overall composition was 88.0% for drug purchase, 7.5% for diagnosis (behavioral screening) and 4.5% for drug delivery (Table 3). The share of drug purchase was highest in adult males (90.7%), followed by adult females (86.3%) and then children (33.0%). The percentage for drug purchase in the overall population reached 91.7% in the very high prevalence group, 89.0% in the high prevalence group, 88.8% in the moderate prevalence group and 55.3% in the low prevalence group. Adult worms of C. sinensis parasitize humans for decades [28]. Thus, drug treatment is necessary to control morbidity and eliminate infection, and it is nowadays the mainstream intervention against clonorchiasis and other human liver fluke infections [29].
Treatment strategies with a high cost yield are needed in large-scale control activities [3]. Using a large sample, this study demonstrated the cost effectiveness and cost utility of three treatment schedules at different endemic levels and in different populations. Both cost effectiveness and cost utility were significantly affected by the treatment strategy, including the endemic level, the targeted population and the treatment schedule. In this study, both effectiveness and utility indicators were modelled. To our knowledge, no study has yet explored the economic evaluation of clonorchiasis treatment in terms of DALYs. Overall, evaluation by DALYs is more comprehensive. On the one hand, both prevalence and infection intensity are considered in DALYs; infection intensity indicates worm burden [30], which is significantly related to morbidity [11, 31]. On the other hand, DALYs include not only disability but also death, which is important because YLLs can be incurred in clonorchiasis through cholangiocarcinoma. Nonetheless, the overall performance of the different treatment strategies is similar under both cost effectiveness and cost utility, because high prevalence usually indicates high infection intensity. The higher the prevalence, the more C. sinensis cases are treated, meaning more cases share the large cost of fecal examination in individual treatment and of drugs in mass and selective chemotherapy. Additionally, in selective chemotherapy the cost yield is also affected by the performance of behavioral screening, which is influenced by many factors [17]. In particular, environmental contamination, the resulting infection in freshwater fish, and control activities vary by area [32,33,34]. However, the screening performance was generally high in areas with high prevalence (Additional file 1: Table S1).
The different cost yields in different populations were essentially attributable to differences in prevalence, because clonorchiasis is distributed very unevenly across genders and ages owing to differences in raw freshwater fish consumption [5, 25, 31]. Adult males show a higher prevalence than adult females, and both show a higher prevalence than children. Thus, in individual treatment and mass chemotherapy, the cost effectiveness in adult males at the high endemic level (overall 10–19.9%) was even preferable to that in children at the very high endemic level (over 20%), because the prevalence was 23.1% in the former and 8.2% in the latter. At the same endemic level and in the same population, the cost yield was usually higher in selective chemotherapy than in individual treatment or mass chemotherapy, because the costs of fecal examination in individual treatment and of drugs in mass chemotherapy were large, as verified by the cost composition, whereas the high screening performance of raw-freshwater fish-eating practice avoided such costs [17]. Notably, the difference between the three treatment schedules was much smaller at high endemic levels than at low endemic levels. While the cost yield of selective chemotherapy declined because the performance of screening cases by raw-fish-eating practice decreases in low endemic areas, the cost effectiveness of the other two schedules decreased even more owing to the decline in prevalence. Thus, the gap between selective chemotherapy and the other two schedules widened as prevalence fell. However, the cost yield was shown to be unacceptable for any treatment strategy when the prevalence is below 1%; new techniques are needed to increase the cost yield in such settings [3]. In this study, only monetary cost was considered, although other factors should not be neglected. Individual treatment relies on definitive individual diagnosis.
The Kato-Katz method is widely applied because of its simplicity [20, 21]. However, it still takes considerable time to collect samples and to prepare and examine the smears. For example, the average time to collect a fecal sample and perform a single or duplicate Kato-Katz thick smear has been estimated at about 20 min and 27 min, respectively [35]. The labor cost is therefore substantial. Furthermore, the availability of enough technicians for large field surveys is also challenging. It should also be considered that in low endemic settings (low-prevalence areas and populations), the diagnostic sensitivity of the Kato-Katz method decreases [36]. By comparison, the time spent screening the whole population for treatment in selective chemotherapy, or delivering drugs to the whole population in mass chemotherapy, is much less. On the other hand, all infected cases can be treated in both individual treatment and mass chemotherapy, regardless of noncompliance and the possibly low sensitivity of fecal examination at low prevalence, whereas it is hard to cover all infected cases in selective chemotherapy because the sensitivity of behavioral screening is usually below 100%. However, a higher cost yield in selective chemotherapy means that more cases can be treated with a given level of resources. It must be noted that this study advocates directing treatment resources to prioritized areas and populations; this does not imply that low endemic areas and low-prevalence populations (i.e., children) are unimportant. New techniques should be developed to detect cases in these areas and populations. Drug-taking compliance probably varies across treatment schedules, endemic levels and populations; this was not considered in the present study and deserves exploration in the future.
Additionally, drug efficacy may also vary across endemic levels and populations owing to differences in infection intensity, which also needs to be explored. This study had several limitations. First, the cost effectiveness and cost utility indicators are both deterministic, without confidence intervals, because the prevalence, the diagnostic performance of fecal examination and behavioral screening, and the costs were all treated as fixed. Future studies should consider the uncertainty in diagnosis and the variation in cost across areas. Second, only short-term effectiveness and utility were modelled; the screening performance of behavior, drug-taking compliance after multiple treatment rounds, and other factors were not considered. Transmission dynamic models are expected to illuminate these issues in the future. This study demonstrates significant variation in cost yield across treatment strategies, covering three treatment schedules, four endemic levels and three types of populations. Although the cost yields of the schedules are close in highly endemic areas (prevalence over 10%), chemotherapy is more acceptable there because of the large labor input needed for diagnosis in individual treatment, and selective chemotherapy shows a slightly higher yield than mass chemotherapy. The cost yield is higher in adults, especially men, than in children. In moderately endemic areas (prevalence 1–9.9%), the cost yield decreases for all treatment schedules, but selective chemotherapy targeting adults may still be considered. In low endemic areas (prevalence < 1%), although selective chemotherapy shows a higher cost yield than the other two schedules, the cost is too high to be acceptable under any strategy, so new techniques should be explored. Overall, to be cost effective, highly endemic areas and adults, especially men, should be prioritized, and chemotherapy, especially selective chemotherapy, is the first choice.
All data supporting the findings of this study are included in the article and additional file. YLDs: Years of life living with a disability YLLs: Years of life lost EPG: Eggs per gram of feces GMEPG: Geometric mean of EPG Harrington D, Lamberton PHL, McGregor A. Human liver flukes. Lancet Gastroenterol Hepatol. 2017;2:680–9. Qian MB, Utzinger J, Keiser J, Zhou XN. Clonorchiasis. Lancet. 2016;387:800–10. Qian MB, Zhou XN. Human liver flukes in China and ASEAN: time to fight together. PLoS Negl Trop Dis. 2019;13:e0007214. Qian MB, Chen YD, Yan F. Time to tackle clonorchiasis in China. Infect Dis Poverty. 2013;2:4. Nguyen TTB, Dermauw V, Dahma H, Bui DT, Le TTH, Phi NTT, et al. Prevalence and risk factors associated with Clonorchis sinensis infections in rural communities in northern Vietnam. PLoS Negl Trop Dis. 2020;14:e0008483. Hong ST, Yong TS. Review of successful control of parasitic infections in Korea. Infect Chemother. 2020;52:427–40. Qiao T, Ma RH, Luo XB, Luo ZL, Zheng PM. Cholecystolithiasis is associated with Clonorchis sinensis infection. PLoS One. 2012;7:e42471. Qian MB, Zhou XN. Global burden of cancers attributable to liver flukes. Lancet Glob Health. 2017;5:e139. Bouvard V, Baan R, Straif K, Grosse Y, Secretan B, El Ghissassi F, et al. A review of human carcinogens—part B: biological agents. Lancet Oncol. 2009;10:321–2. Qian MB, Li HM, Jiang ZH, Yang YC, Lu MF, Wei K, et al. Severe hepatobiliary morbidity is associated with Clonorchis sinensis infection: the evidence from a cross-sectional community study. PLoS Negl Trop Dis. 2021;15:e0009116. Qian MB, Chen YD, Fang YY, Xu LQ, Zhu TJ, Tan T, et al. Disability weight of Clonorchis sinensis infection: captured from community study and model simulation. PLoS Negl Trop Dis. 2011;5:e1377. Chen YD, Li HZ, Xu LQ, Qian MB, Tian HC, Fang YY, et al. Effectiveness of a community-based integrated strategy to control soil-transmitted helminthiasis and clonorchiasis in the People's Republic of China. Acta Trop.
2020;214:105650. Choi MH, Park SK, Li Z, Ji Z, Yu G, Feng Z, et al. Effect of control strategies on prevalence, incidence and re-infection of clonorchiasis in endemic areas of China. PLoS Negl Trop Dis. 2010;4:e601. Keiser J, Utzinger J. Chemotherapy for major food-borne trematodes: a review. Expert Opin Pharmacother. 2004;5:1711–26. Hong ST, Rim HJ, Min DY, Li X, Xu J, Feng Z, et al. Control of clonorchiasis by repeated treatments with praziquantel. Korean J Parasitol. 2001;39:285–92. Hong ST, Yoon K, Lee M, Seo M, Choi MH, Sim JS, et al. Control of clonorchiasis by repeated praziquantel treatment and low diagnostic efficacy of sonography. Korean J Parasitol. 1998;36:249–54. Qian MB, Jiang ZH, Ge T, Wang X, Zhou CH, Zhu HH, et al. Rapid screening of Clonorchis sinensis infection: performance of a method based on raw-freshwater fish-eating practice. Acta Trop. 2020;207:105380. Yajima A, Cong DT, Trung DD, Cam TD, Montresor A. Cost comparison of rapid questionnaire screening for individuals at risk of clonorchiasis in low- and high-prevalence communities in northern Vietnam. Trans R Soc Trop Med Hyg. 2009;103:447–51. Qian MB, Jiang ZH, Ge T, Wang X, Deng ZH, Zhou CH, et al. Association of raw-freshwater fish-eating practice with the infection of Clonorchis sinensis. Chin J Parasitol Parasit Dis. 2019;37:296–301 (In Chinese). Hong ST, Choi MH, Kim CH, Chung BS, Ji Z. The Kato-Katz method is reliable for diagnosis of Clonorchis sinensis infection. Diagn Microbiol Infect Dis. 2003;47:345–7. Qian MB, Yap P, Yang YC, Liang H, Jiang ZH, Li W, et al. Accuracy of the Kato-Katz method and formalin-ether concentration technique for the diagnosis of Clonorchis sinensis, and implication for assessing drug efficacy. Parasit Vectors. 2013;6:314. Qian MB, Chen YD, Fang YY, Tan T, Zhu TJ, Zhou CH, et al. Epidemiological profile of Clonorchis sinensis infection in one community, Guangdong, People's Republic of China. Parasit Vectors. 2013;6:194. 
National Health and Family Planning Commission of the People's Republic of China. Report on nutritional status and chronic diseases among Chinese population (2015). Beijing: People's Medical Publishing House; 2015. King CH, Bertino AM. Asymmetries of poverty: why global burden of disease valuations underestimate the burden of neglected tropical diseases. PLoS Negl Trop Dis. 2008;2:e209. Qian MB, Chen YD, Liang S, Yang GJ, Zhou XN. The global epidemiology of clonorchiasis and its relation with cholangiocarcinoma. Infect Dis Poverty. 2012;1:4. Collaborators GBDM. Global, regional, and national age-sex-specific mortality and life expectancy, 1950–2017: a systematic analysis for the Global Burden of Disease Study 2017. Lancet. 2018;392:1684–735. Zeng HM, Cao MM, Zheng RS, Zhangs SW, Cai JQ, Qu CF, et al. Trend analysis of age of diagnosis for liver cancer in cancer registry areas of China, 2000–2014. Chin J Prev Med. 2018;52:573–8 (In Chinese). Attwood HD, Chou ST. The longevity of Clonorchis sinensis. Pathology. 1978;10:153–6. Prichard RK, Basanez MG, Boatin BA, McCarthy JS, Garcia HH, Yang GJ, et al. A research agenda for helminth diseases of humans: intervention for control and elimination. PLoS Negl Trop Dis. 2012;6:e1549. Kim JH, Choi MH, Bae YM, Oh JK, Lim MK, Hong ST. Correlation between discharged worms and fecal egg counts in human clonorchiasis. PLoS Negl Trop Dis. 2011;5:e1339. Lee SE, Shin HE, Lee MR, Kim YH, Cho SH, Ju JW. Risk factors of Clonorchis sinensis human infections in endemic areas, Haman-Gun, Republic of Korea: a case-control study. Korean J Parasitol. 2020;58:647–52. Chen D, Chen J, Huang J, Chen X, Feng D, Liang B, et al. Epidemiological investigation of Clonorchis sinensis infection in freshwater fishes in the Pearl River Delta. Parasitol Res. 2010;107:835–9. Zhang Y, Chang QC, Zhang Y, Na L, Wang WT, Xu WW, et al. Prevalence of Clonorchis sinensis infection in freshwater fishes in northeastern China. Vet Parasitol. 2014;204:209–13. 
Zhang Y, Gong QL, Lv QB, Qiu YY, Wang YC, Qiu HY, et al. Prevalence of Clonorchis sinensis infection in fish in South-East Asia: a systematic review and meta-analysis. J Fish Dis. 2020;43:1409–18. Speich B, Knopp S, Mohammed KA, Khamis IS, Rinaldi L, Cringoli G, et al. Comparative cost assessment of the Kato-Katz and FLOTAC techniques for soil-transmitted helminth diagnosis in epidemiological surveys. Parasit Vectors. 2010;3:71. Qian MB, Zhuang SF, Zhu SQ, Deng XM, Li ZX, Zhou XN. Improving diagnostic performance of the Kato-Katz method for Clonorchis sinensis infection through multiple samples. Parasit Vectors. 2019;12:336. We thank the staff in provincial-level Centers for Disease Control and Prevention in Guangxi, Heilongjiang, Jilin and Guangdong, and the staff from county-level Centers for Disease Control and Prevention for their help in the investigation. This study was supported by the UBS Optimus Foundation (Grant No. 9051). M-BQ and X-NZ were financially supported by the Forth Round of Three-Year Public Health Action Plan (2015–2017) in Shanghai, China (Grant No. GWTD2015S06). National Institute of Parasitic Diseases, Chinese Center for Disease Control and Prevention, Shanghai, China Men-Bao Qian, Chang-Hai Zhou, Hui-Hui Zhu, Ying-Dan Chen & Xiao-Nong Zhou Chinese Center for Tropical Diseases Research, Shanghai, China Key Laboratory of Parasite and Vector Biology, National Health Commission, Shanghai, China National Center for International Research on Tropical Diseases, Ministry of Science and Technology, Shanghai, China WHO Collaborating Center for Tropical Diseases, Shanghai, China School of Global Health, Chinese Center for Tropical Diseases Research, Shanghai Jiao Tong University School of Medicine, Shanghai, China Men-Bao Qian & Xiao-Nong Zhou Men-Bao Qian Chang-Hai Zhou Hui-Hui Zhu Ying-Dan Chen M-BQ and X-NZ designed the study. M-BQ, C-HZ, H-HZ and Y-DC collected the data. M-BQ analyzed the data. M-BQ wrote the first draft of the paper. 
All authors read and approved the final manuscript. The study was approved by the Ethics Committee of the National Institute of Parasitic Diseases, China CDC. The objectives, procedures and potential risks of the study were explained orally to all participants. Written consent was also obtained, signed by the participant or, for a child, by his/her guardian. Xiao-Nong Zhou is an Editor-in-Chief of the journal Infectious Diseases of Poverty. He was not involved in the peer review or handling of the manuscript. The authors have no other competing interests to disclose. Table S1. Epidemiological profiles of Clonorchis sinensis infection and raw-freshwater fish-eating practice. Table S2. Disability-adjusted life years caused by Clonorchis sinensis infection by counties, endemic levels and populations. Qian, MB., Zhou, CH., Zhu, HH. et al. Cost yield of different treatment strategies against Clonorchis sinensis infection. Infect Dis Poverty 10, 136 (2021). https://doi.org/10.1186/s40249-021-00917-1 Received: 30 May 2021 Clonorchis sinensis Treatment strategy
Descriptive statistics are used to describe the basic features of the data in a study. They provide simple summaries about the sample and the measures. Together with simple graphics analysis, they form the basis of virtually every quantitative analysis of data. Descriptive statistics are typically distinguished from inferential statistics. With descriptive statistics you are simply describing what is or what the data shows. With inferential statistics, you are trying to reach conclusions that extend beyond the immediate data alone. For instance, we use inferential statistics to try to infer from the sample data what the population might think. Or, we use inferential statistics to make judgments of the probability that an observed difference between groups is a dependable one or one that might have happened by chance in this study. Thus, we use inferential statistics to make inferences from our data to more general conditions; we use descriptive statistics simply to describe what's going on in our data. Descriptive Statistics are used to present quantitative descriptions in a manageable form. In a research study we may have lots of measures. Or we may measure a large number of people on any measure. Descriptive statistics help us to simplify large amounts of data in a sensible way. Each descriptive statistic reduces lots of data into a simpler summary. For instance, consider a simple number used to summarize how well a batter is performing in baseball, the batting average. This single number is simply the number of hits divided by the number of times at bat (reported to three significant digits). A batter who is hitting .333 is getting a hit one time in every three at bats. One batting .250 is hitting one time in four. The single number describes a large number of discrete events. Or, consider the scourge of many students, the Grade Point Average (GPA).
This single number describes the general performance of a student across a potentially wide range of course experiences. Every time you try to describe a large set of observations with a single indicator you run the risk of distorting the original data or losing important detail. The batting average doesn't tell you whether the batter is hitting home runs or singles. It doesn't tell whether she's been in a slump or on a streak. The GPA doesn't tell you whether the student was in difficult courses or easy ones, or whether they were courses in their major field or in other disciplines. Even given these limitations, descriptive statistics provide a powerful summary that may enable comparisons across people or other units. Univariate Analysis Univariate analysis involves the examination across cases of one variable at a time. There are three major characteristics of a single variable that we tend to look at: the distribution, the central tendency, and the dispersion. In most situations, we would describe all three of these characteristics for each of the variables in our study. The distribution is a summary of the frequency of individual values or ranges of values for a variable. The simplest distribution would list every value of a variable and the number of persons who had each value. For instance, a typical way to describe the distribution of college students is by year in college, listing the number or percent of students at each of the four years. Or, we describe gender by listing the number or percent of males and females. In these cases, the variable has few enough values that we can list each one and summarize how many sample cases had the value. But what do we do for a variable like income or GPA? With these variables there can be a large number of possible values, with relatively few people having each one. In this case, we group the raw scores into categories according to ranges of values. For instance, we might look at GPA according to the letter grade ranges.
Or, we might group income into four or five ranges of income values.

[Table: age frequency distribution with five age-range categories; only two rows are recoverable here - Under 35 years old: 9%, 66+: 6%]

One of the most common ways to describe a single variable is with a frequency distribution. Depending on the particular variable, all of the data values may be represented, or you may group the values into categories first (e.g., with age, price, or temperature variables, it would usually not be sensible to determine the frequencies for each value; rather, the values are grouped into ranges and the frequencies determined). Frequency distributions can be depicted in two ways, as a table or as a graph. The table above shows an age frequency distribution with five categories of age ranges defined. The same frequency distribution can be depicted in a graph as shown in Figure 1. This type of graph is often referred to as a histogram or bar chart.

Figure 1. Frequency distribution bar chart.

Distributions may also be displayed using percentages. For example, you could use percentages to describe the:

- percentage of people in different income levels
- percentage of people in different age ranges
- percentage of people in different ranges of standardized test scores

Central Tendency

The central tendency of a distribution is an estimate of the "center" of a distribution of values. There are three major types of estimates of central tendency: the mean, the median, and the mode.

The Mean or average is probably the most commonly used method of describing central tendency. To compute the mean all you do is add up all the values and divide by the number of values. For example, the mean or average quiz score is determined by summing all the scores and dividing by the number of students taking the exam. For example, consider the test score values:

15, 20, 21, 20, 36, 15, 25, 15

The sum of these 8 values is 167, so the mean is 167/8 = 20.875.

The Median is the score found at the exact middle of the set of values. One way to compute the median is to list all scores in numerical order, and then locate the score in the center of the sample.
For example, if there are 500 scores in the list, score #250 would be the median. If we order the 8 scores shown above, we would get:

15, 15, 15, 20, 20, 21, 25, 36

There are 8 scores and score #4 and #5 represent the halfway point. Since both of these scores are 20, the median is 20. If the two middle scores had different values, you would have to interpolate to determine the median.

The Mode is the most frequently occurring value in the set of scores. To determine the mode, you might again order the scores as shown above, and then count each one. The most frequently occurring value is the mode. In our example, the value 15 occurs three times and is the mode. In some distributions there is more than one modal value. For instance, in a bimodal distribution there are two values that occur most frequently.

Notice that for the same set of 8 scores we got three different values (20.875, 20, and 15) for the mean, median and mode respectively. If the distribution is truly normal (i.e., bell-shaped), the mean, median and mode are all equal to each other.

Dispersion

Dispersion refers to the spread of the values around the central tendency. There are two common measures of dispersion, the range and the standard deviation. The range is simply the highest value minus the lowest value. In our example distribution, the high value is 36 and the low is 15, so the range is 36 - 15 = 21.

The Standard Deviation is a more accurate and detailed estimate of dispersion, because an outlier can greatly exaggerate the range (as was true in this example, where the single outlier value of 36 stands apart from the rest of the values). The Standard Deviation shows the relation that set of scores has to the mean of the sample. Again, let's take the set of scores:

15, 20, 21, 20, 36, 15, 25, 15

To compute the standard deviation, we first find the distance between each value and the mean. We know from above that the mean is 20.875.
So, the differences from the mean are:

15 - 20.875 = -5.875
20 - 20.875 = -0.875
21 - 20.875 = +0.125
20 - 20.875 = -0.875
36 - 20.875 = +15.125
15 - 20.875 = -5.875
25 - 20.875 = +4.125
15 - 20.875 = -5.875

Notice that values that are below the mean have negative discrepancies and values above it have positive ones. Next, we square each discrepancy:

-5.875 * -5.875 = 34.515625
-0.875 * -0.875 = 0.765625
+0.125 * +0.125 = 0.015625
-0.875 * -0.875 = 0.765625
+15.125 * +15.125 = 228.765625
-5.875 * -5.875 = 34.515625
+4.125 * +4.125 = 17.015625
-5.875 * -5.875 = 34.515625

Now, we take these "squares" and sum them to get the Sum of Squares (SS) value. Here, the sum is 350.875. Next, we divide this sum by the number of scores minus 1. Here, the result is 350.875 / 7 = 50.125. This value is known as the variance. To get the standard deviation, we take the square root of the variance (remember that we squared the deviations earlier). This would be SQRT(50.125) = 7.079901129253.

Although this computation may seem convoluted, it's actually quite simple. To see this, consider the formula for the standard deviation:

$$ \sqrt{\frac{\sum(X-\bar{X})^2}{n-1}} $$

where X is each score, X̄ is the mean (or average), n is the number of values, and Σ means we sum across the values. In the top part of the ratio, the numerator, we see that each score has the mean subtracted from it, the difference is squared, and the squares are summed. In the bottom part, we take the number of scores minus 1. The ratio is the variance and the square root is the standard deviation. In English, we can describe the standard deviation as: the square root of the sum of the squared deviations from the mean divided by the number of scores minus one.

Although we can calculate these univariate statistics by hand, it gets quite tedious when you have more than a few values and variables. Every statistics program is capable of calculating them easily for you. For instance, I put the eight scores into SPSS and got the following table as a result:

Mean                20.8750
Median              20.0000
Mode                15.00
Standard Deviation   7.0799
Variance            50.1250
Range               21.00

which confirms the calculations I did by hand above.
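The same summaries can be reproduced programmatically. The following minimal sketch (not part of the original text) uses Python's standard-library statistics module on the eight example scores and matches the hand computation and the SPSS table above:

```python
import statistics

# The eight example test scores used throughout this section
scores = [15, 20, 21, 20, 36, 15, 25, 15]

mean = statistics.mean(scores)      # 167 / 8 = 20.875
median = statistics.median(scores)  # average of the two middle ordered scores
mode = statistics.mode(scores)      # most frequent value: 15 (occurs three times)

# Sample variance: sum of squared deviations from the mean, divided by n - 1
variance = statistics.variance(scores)  # 350.875 / 7 = 50.125
stdev = statistics.stdev(scores)        # square root of the variance, about 7.0799

value_range = max(scores) - min(scores)  # 36 - 15 = 21

print(mean, median, mode)  # 20.875 20.0 15
print(variance, stdev, value_range)
```

The printed values agree with the SPSS output: mean 20.875, median 20, mode 15, variance 50.125, standard deviation 7.0799, range 21.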
The standard deviation allows us to reach some conclusions about specific scores in our distribution. Assuming that the distribution of scores is normal or bell-shaped (or close to it!), the following conclusions can be reached:

- approximately 68% of the scores in the sample fall within one standard deviation of the mean
- approximately 95% of the scores in the sample fall within two standard deviations of the mean
- approximately 99.7% of the scores in the sample fall within three standard deviations of the mean

For instance, since the mean in our example is 20.875 and the standard deviation is 7.0799, we can from the above statement estimate that approximately 95% of the scores will fall in the range of 20.875-(2*7.0799) to 20.875+(2*7.0799), or between 6.7152 and 35.0348. This kind of information is a critical stepping stone to enabling us to compare the performance of an individual on one variable with their performance on another, even when the variables are measured on entirely different scales.
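As a small sketch (again, not from the original text), the ±2 standard deviation interval quoted above can be computed directly from the example's mean and standard deviation:

```python
mean = 20.875
stdev = 7.0799

# Roughly 95% of scores in a near-normal distribution fall within
# two standard deviations of the mean.
low = mean - 2 * stdev
high = mean + 2 * stdev

print(round(low, 4), round(high, 4))  # 6.7152 35.0348
```

This reproduces the interval stated in the text: roughly 95% of the scores are expected to lie between 6.7152 and 35.0348.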
Methodology towards accessing small molecule heterocycles for h20S and TB proteasome modulation
Bethel, Travis Kordero

"This dissertation focused on the development and advancement of methodology for accessing imidazoline scaffolds and other small heterocyclic molecules for biological evaluation. Past research within the Tepe group has correlated functionalized 2-imidazolines to proteasome modulation. Further diversification of the methodology for accessing these 2-imidazoline scaffolds has allowed for the synthesis of a small library of analogs for SAR evaluation with the h20S proteasome. These findings were used to further experimentally model and synthesize more efficacious 2-imidazoline derivatives for proteasome modulation. The proteasome is responsible for the degradation of polyubiquitinated proteins in the cell, producing amino acids that can then be used for alternative cellular functions. Small heterocyclic molecules like 2-imidazolines bind to the proteasome and lower its efficacy for protein digestion through modulation of its activity."--Page ii.
Three Essays in the Economics of Education
Kho, Kevin

Chapter 1: School Cellphone Bans and Student Substance Abuse: Evidence From California Public High Schools

Following high profile school shootings and the September 11th terrorist attacks, public concern over school emergency preparedness prompted the California State Legislature in 2003 to overturn a statewide ban against student possession of cellphones on campuses. After the repeal of the prohibition, which had been established in 1988 to curb drug dealing, school districts were allowed individually to either continue banning phones or modify their device policies; most opted over time to accommodate usage during certain hours of the day. Using fixed effects regression analysis clustered at the district level, I exploit variation in the timing of district policies to estimate the impact on substance abuse from lifting school cellphone bans.
Results provide evidence that allowing students to use cellphones at school increases opportunities to obtain and abuse controlled substances; this effect is particularly pronounced in the incidence of marijuana smoking among 9th graders, who exhibit a 1.3 percentage point higher chance of reporting past-month marijuana use in the year a ban is lifted. Factors involved may include the capability that the technology provides to negotiate high risk interactions in private and to seek out and contact a relatively small number of drug suppliers; as is thus to be expected, no impact is found on the consumption of cigarettes, which can be obtained legally by a large proportion of high schoolers.

Chapter 2: Impact of Internet Access on Student Learning in Peruvian Schools (with Leah Lakdawala and Eduardo Nakasone)

We investigate the impacts of school-based internet access on pupil achievement in Peru, using a large panel of 5,903 public primary schools that gained internet connections during 2007-2014. We employ an event study approach and a trend break analysis that exploit variation in the timing of internet roll-out up to 5 years after installation. We find that internet access has a moderate, positive short-run impact on school-average standardized math scores, but importantly that this effect grows over time. We provide evidence that schools require time to adapt to internet access by hiring teachers with computer training and that this process is not immediate.
These dynamics highlight the need for complementary investments to fully exploit new technological inputs and underscore the importance of using an extended evaluation window to allow the effects of school-based internet on learning to materialize.

Chapter 3: Discretionary School Discipline Policies and Demographic Disparities

In 2014, California passed the law AB 420, becoming the first state to limit the use of school suspensions and expulsions as punishment for "willful defiance" - a subjectively determined offense thought by state lawmakers to lead to racial disparities in discipline. In this paper, I overview the state's recent (from 2012-2017) progress in reducing exclusionary discipline and note effects on disproportionality, here characterized as the difference between a given group's proportion of discipline and its proportion of enrollment. Using identification by treatment intensity, based on schools' pre-AB 420 proportion of discipline attributable to willful defiance, I also attempt to gauge the effectiveness of reducing punishment of defiance in mitigating disproportionality. School level administrative data from elementary schools (spanning kindergarten through 5th grade) indicate that exclusionary discipline has considerably declined throughout the period. On the other hand, it does not appear that AB 420, along with lower willful defiance related discipline, has reduced disproportionality.

Constellating cultural rhetorics, first year writing, and service-learning : a story of teaching and learning
Prielipp, Sarah E.

This dissertation examines the relationships among cultural rhetorics theory and methods, first year writing, and service-learning by showing the ways these theories and pedagogies constellate, or build, new things from their intersections and relationality. The author argues that "story is theory is practice" and demonstrates how this can work in first year writing through a cultural rhetorics-informed service-learning pedagogy. The author explains that this story of teaching and learning – both hers and her students' – builds theory through sharing their stories of practice in their writing classroom. This theory/story/practice shows us how relationality, accountability, and reciprocity help develop habits of mind that may transfer to other situations as students become active, engaged citizens for social justice.

Chapter one develops Wilson's Indigenous research paradigm as a theoretical framework for the author's teaching and research by explaining her research paradigm for this project and discussing the literature that she draws on throughout this project. Chapter two further explains how she defines and uses service-learning by providing two case studies from the FYW courses she taught at Michigan State University in the 2016-2017 academic year. Chapter three begins to constellate cultural rhetorics theory and methods, first year writing, and service-learning using Wilson's Indigenous research paradigm as a framework. The "half" chapters are her students' voices, their stories in their words; these student selections help to show how they are practicing habits of mind throughout the course in their writing.
Controlling the surface processes of X- and Z-type ligands to tailor the photophysics of II-VI semiconductor nanocrystals
Saniepay, Mersedeh

II−VI colloidal semiconductor nanocrystals (NCs), such as CdSe NCs, are often plagued by efficient nonradiative recombination processes that severely limit their use in energy-conversion schemes. While these processes are now well-known to occur at the surface, a full understanding of the exact nature of surface defects and of their role in deactivating the excited states of NCs has yet to be established, which is partly due to challenges associated with the direct probing of the complex and dynamic surface of colloidal NCs. In this dissertation, we report a detailed study of the surface of cadmium-rich zinc-blende CdSe NCs. The surfaces of these cadmium-rich species are characterized by the presence of cadmium carboxylate complexes (CdX2) that act as Lewis acid (Z-type) ligands that passivate under-coordinated selenide surface species. The systematic displacement of CdX2 from the surface by N,N,N′,N′-tetramethylethylene-1,2-diamine (TMEDA) has been studied using a combination of 1H NMR and photoluminescence spectroscopies. We demonstrate the existence of two independent surface sites that differ strikingly in the binding affinity for CdX2 and that are under dynamic equilibrium with each other.
A model involving coupled dual equilibria allows a full characterization of the thermodynamics of surface binding (free energy, as well as enthalpic and entropic terms), showing that entropic contributions are responsible for the difference between the two surface sites. Importantly, we demonstrate that cadmium vacancies only lead to important photoluminescence quenching when created on one of the two sites, allowing a complete picture of the surface composition to be drawn where each site is assigned to a specific NC facet locale, with CdX2 binding affinity and nonradiative recombination efficiencies that differ by up to two orders of magnitude.

To understand the effect of steric hindrance and types of functional groups in different ligands on X-type ligand exchanges, using NMR, PL and UV-Vis absorption spectroscopy, we studied X-type exchanges on CdSe NCs capped with native carboxylates, with oleic acid, oleyl thiol, benzoic acid and benzenethiol ligands. We discussed the results and occurrence of undesired pathways, including displacement of Z-type ligands, and suggested ligand exchange strategies that most likely lead to 100% X-type exchange.

The structural complexity of the surface of CdS NCs is also discussed in this dissertation. We demonstrate the presence of two different sulfur surface defects on CdS NCs with ligand binding equilibrium constants that are two orders of magnitude apart and 20-60% smaller than those of selenium on similar size CdSe NCs. We also correlated the different surface defects to the PL quenching efficiency of CdS NCs.

A search for resonant Z' production in high-mass dielectron final states with the ATLAS detector in Run-2 of the Large Hadron Collider
Willis, Christopher G.

A search is performed for new resonant high-mass phenomena in the dielectron final state. The search uses 36.1 $\mathrm{fb}^{-1}$ of proton-proton collision data, collected at $\sqrt{s} = 13$ TeV by the ATLAS experiment at the Large Hadron Collider during its 2015 and 2016 data-taking runs. The dielectron invariant mass is used as the search variable. No significant deviations from the Standard Model prediction are observed. Upper limits at the 95% credibility level are set on the cross section times branching fraction to dielectron pairs for resonant $Z^{\prime}$ models considered in the search. Lower limits on the resonance pole mass are also presented. For the $Z^{\prime}_{\mathrm{SSM}}$, masses are excluded up to 4.5 TeV, while masses up to 4.1 TeV are excluded in the $E_{6}$-motivated $Z^{\prime}_{\chi}$ model. Limits are also derived in the Minimal $Z^{\prime}$ Model on the relative coupling strength $\gamma^{\prime}$. In addition, a series of studies are conducted in order to assess and reduce the dominant systematic uncertainty of this analysis, which arises from the imprecise knowledge of the Parton Distribution Functions in regions of very high parton $x$. While this uncertainty does not limit the discovery potential of the analysis presented here, it has the potential to do so in future searches. A novel approach is developed, and is shown to significantly reduce this systematic uncertainty in the high-mass search region of interest, thereby improving the discovery potential of future analyses.
"Flooding oil" : investigating poor health in vulnerable communities in the Niger Delta Region of Nigeria Barry, Fatoumata Binta The Niger Delta region in Nigeria has been exploited for decades due to extensive oil and gas deposits that have led to devastating livelihood and health consequences. In addition to oil and gas industry impacts, floods are intensifying in Niger Delta communities that have annual flooding during the rainy season (April to October). In 2012, Nigeria experienced a severe flooding event that damaged infrastructure and livelihoods with virtually no studies completed about the health consequences.... Show moreThe Niger Delta region in Nigeria has been exploited for decades due to extensive oil and gas deposits that have led to devastating livelihood and health consequences. In addition to oil and gas industry impacts, floods are intensifying in Niger Delta communities that have annual flooding during the rainy season (April to October). In 2012, Nigeria experienced a severe flooding event that damaged infrastructure and livelihoods with virtually no studies completed about the health consequences. This dissertation research study aims to fill this scholarly gap by disentangling the emerging health concerns in Niger Delta oil communities with particular attention to women and children as they are sensitive indicators of population health. It utilizes a mixed-methods approach with the inclusion of Eco-Syndemics and African womanism theoretical perspectives. It was found that the Niger Delta has multiple pre-existing vulnerabilities that put the population at more risk during flooding events. 
Also, through an evaluation of airborne concentrations of chemicals released by gas flares and a retrospective, cross-sectional comparison, women and children in Uzere (oil community) have greater exposure levels to toxic chemicals released and more health concerns than similar women and children in Aviara (non-oil community), even though both communities are located in flood-prone areas in the Niger Delta. Overall, this dissertation research advances our understanding of the complexity of health hazards in communities close to oil and gas activities in the midst of more severe flooding. It also enriches scholarly and policy debates by providing an initial assessment of the link between climate variability and health in vulnerable communities. -- Abstract.

The role of fetuin-A on adipose tissue lipid mobilization in dairy cows
Strieder-Barboza, Clarissa

Adipose tissue (AT) is a major modulator of metabolic functions by regulating energy storage and acting as an endocrine organ. In periparturient dairy cows, increased AT mobilization of free fatty acids (FFA) is a major adaptive mechanism to cope with higher energy demand for rapid fetal growth and the onset of lactation. As lactation progresses, lipolysis rates decrease, and lipogenesis replenishes triacylglycerol (TAG) stores in adipocytes.
However, dysregulated metabolic responses, characterized by altered AT sensitivity to hormonal and endocrine changes around parturition, lead to a massive release of FFA into circulation and an increased susceptibility of cows to disease. These maladaptive responses are underlined by an altered secretory pattern of adipokines and a marked imbalance in lipolysis and lipogenesis rates, favoring TAG breakdown in adipocytes. Thus, identifying adipokines that modulate AT function in periparturient dairy cows can facilitate the development of novel management, nutritional, or pharmaceutical interventions to reduce disease incidence. Fetuin-A (FetA; alpha-2-Heremans-Schmid glycoprotein, AHSG) is an adipokine that functions as a carrier of FFA in plasma and is associated with insulin-mediated inhibition of lipolysis and stimulation of lipogenesis in humans. FetA increases the incorporation of fatty acids (FA) into intracellular lipids and enhances cellular TAG in human cells. However, the mechanisms by which FetA induces TAG synthesis are not defined. FetA also has anti-inflammatory properties, inhibiting the production of pro-inflammatory cytokines and acting as a negative acute-phase protein (APP) in acute inflammation. These findings suggest that FetA may also be involved in lipid mobilization and inflammation in AT of dairy cows. In our first in vivo study with periparturient dairy cows, we observed that serum and AT FetA expression decreased at the onset of lactation, when lipogenesis was downregulated and plasma FFA was increased. FetA expression dynamics in AT were analogous to the patterns of lipogenic markers, suggesting its link with lipid mobilization in AT of dairy cows. We also demonstrated that FetA is a negative APP locally in AT of dairy cows. These results suggest that FetA could support physiological adaptations to negative energy balance (NEB) in AT of periparturient dairy cows.
To explore the potential roles of FetA on AT lipid mobilization of dairy cows, we developed an in vitro model for culturing bovine adipocytes that closely mimics the in vivo AT environment. For the first time, we reported an abundant expression and secretion of FetA by primary bovine adipocytes, thus suggesting a potential autocrine effect of FetA in AT of dairy cows. We observed that FetA attenuates lipolytic responses and enhances both FA uptake and TAG accumulation in bovine adipocytes. Our results reveal that the upregulation of the expression and activity of 1-acylglycerol-3-phosphate acyltransferase (AGAPT2), a rate-limiting lipogenic enzyme for TAG synthesis, may be a potential mechanism by which FetA enhances the lipogenic function of bovine adipocytes. Overall, our results indicate that FetA is a lipogenic adipokine with anti-inflammatory function in the AT of dairy cows. Our findings provide evidence that FetA could buffer increased plasma FFA during negative energy balance by stimulating AGAPT2 activity and the use of excess FFA for TAG synthesis in AT of dairy cows. The genetic selection of cows by variations of the FetA coding gene associated with its anti-lipolytic and pro-lipogenic functions (already known in humans), the identification of dietary supplements (i.e. FA) that enhance FetA function, as well as the parenteral use of FetA to stimulate AGAPT2 activity, could serve as potential strategies to be tested and implemented in dairy cows.

Iridium catalyzed C-H activation borylations of fluorine bearing arenes and related studies
Jayasundara, Chathurika Ruwanthi Kumarihami

During the last two decades, iridium catalyzed aromatic borylation has emerged as one of the most convenient methodologies for functionalizing arenes and heteroarenes. The regioselectivity of Ir-catalyzed borylations is typically governed by sterics; therefore it complements the regioselectivity found in electrophilic aromatic substitution or directed ortho metalation. This unique regioselectivity and broad functional group tolerance (ester, amide, halogen, etc.) allows for the synthesis of novel synthetic intermediates, many of which were previously either unknown or difficult to make. Since these reactions are mainly driven by sterics, it is possible to install a boronic ester group (Bpin) next to small substituents like hydrogen, cyano, or fluorine. This feature is helpful but can also create challenges, especially in cases like the borylation of fluoroarenes, which tend to give a 1:1 mixture of steric (meta to fluorine) and electronic (ortho to fluorine) products. Therefore, to overcome this problem, we introduced a two-step Ir-catalyzed borylation/Pd-catalyzed dehalogenation sequence that allows one to synthesize fluoroarenes where the boronic ester is ortho to fluorine (electronic). Here, a halogen para to the fluorine is used as a sacrificial blocking group, allowing the Ir-catalyzed borylation to favor the electronic product exclusively. Then chemoselective Pd-catalyzed dehalogenation by KF-activated polymethylhydrosiloxane (PMHS) is used to remove the halogen without compromising the Bpin group. Halosubstituted aryl boronates have the potential for orthogonal reactivity in cross-coupling reactions. We began exploring cross-coupling of triorganoindiums with these aryl halides bearing boronic esters in collaboration with Prof. P. Sestelo at the Universidade da Coruña, Spain.
We were able to synthesize borylated biaryls by merging Ir-catalyzed C–H borylations with Pd-catalyzed organoindium cross-couplings. As a part of the Dow–MSU-GOALI collaborations, we were able to synthesize a cobalt catalyst for C-H borylations of alkyl arenes and heteroarenes. This catalyst enables selective monoborylation of the benzylic position of alkyl arenes using pinacolborane (HBpin) as the boron source. In 2016, an internship opportunity led to the screening of ligands for C-H borylations at the Dow Chemical Company in Midland, MI. From this internship opportunity, we discovered the first ligand-controlled synthesis of 1,2-di- and 1,2,3-triborylated arenes. I also investigated a recyclable heterogeneous iridium catalyst for borylations during the internship. Finally, a bulky terphenyl-incorporated bipyridine ligand was synthesized for selective iridium catalyzed para C–H borylations.

A multidimensional treatment integrity assessment of parent coaching in a telehealth parent training program for autism spectrum disorder
Tran, Shannon Quyen

An important principle of evidence-based practice (EBP) is using interventions with strong empirical support for their effectiveness, commonly known as evidence-based interventions (EBIs). Evidence of an intervention's effectiveness is strongest when supported by treatment integrity data. Treatment integrity refers to the degree to which an intervention is implemented as intended by the original design.
The assessment's purpose is to provide researchers and practitioners with data about the implementation process to enable valid conclusions to be drawn about an intervention's effectiveness. The present study focused on the treatment integrity assessment of Project ImPACT (Improving Parents as Communication Teachers; Ingersoll & Dvortcsak, 2010), a parent training program that aims to improve parents' competence in teaching social communication skills to children diagnosed with autism spectrum disorder (ASD). The parent coaching portion of the training program was the focus of this study. Treatment integrity assessment occurred at two stages: the coaching delivery and the treatment delivery. This study used videos of coaching sessions from two randomized controlled trial (RCT) studies that examined the effectiveness of delivering Project ImPACT via telehealth with and without parent coaching. Dane and Schneider's (1998) treatment integrity conceptual framework was used to guide the assessment. For the coaching delivery, the assessment focused on the therapists' adherence to the coaching procedure, provision of feedback, and quality of coaching delivery, and the parents' responsiveness during the coaching session. For the treatment delivery, the assessment focused on the parents' adherence to the intervention strategies and quality of the treatment delivery. Descriptive statistics provided a general overview of the therapists' coaching performance and the parents' teaching performance. Multilevel regression analysis determined which components of the coaching delivery best predicted how parents used the intervention techniques and structured the play session for their child during the coaching sessions. Overall, the therapists consistently completed the essential steps of the coaching process. They frequently provided comprehensive feedback, attention, and reassurance.
They did not provide as many opportunities for the parents to engage in collaborative problem-solving or to reflect on their implementation progress. In turn, the parents fully participated in the coaching sessions and demonstrated sufficient capacity to implement the intervention techniques and structure a meaningful play session for their child. Results from a multilevel regression analysis indicated that none of the treatment integrity components of the coaching delivery significantly predicted the parents' treatment adherence. The quality of coaching delivery did, however, significantly predict the parents' structure of the play segment, albeit in a negative direction. The study's results, along with its limitations, provided a platform for continuing the conversation about treatment integrity assessment in intervention studies. In particular, the study concluded with new questions about the conceptualization and operationalization of different parent coaching aspects for parent-implemented interventions. Seeking to understand the concept and improve the measurement of these parent coaching aspects can lead to a more accurate identification of the active ingredients of parent coaching in ASD parent-implemented interventions. Fundamental studies and engineering modeling of hydrogen bonding Bala Ahmed, Aseel Mohamed Ahmed
This project aims to enhance the engineering modeling of hydrogen bonding, or association, by blending ab initio quantum calculations, fundamental molecular-level findings from experimental techniques, and thermodynamic models. Because of the ubiquity of hydrogen bonding, applications for an improved association model are extensive, ranging from drug design to plastics manufacturing. Therefore, a substantial amount of work has been aimed at improving traditional thermodynamic tools, which often fail to capture the behavior of associating systems accurately. To guide models, spectroscopic techniques have been leveraged to gain insight into the interactions between molecules in the liquid phase, but interpretation is difficult. Moreover, with the advancement of computational chemistry technology, molecular dynamics (MD) and quantum mechanical (QM) calculations have also been utilized to understand the characteristics of hydrogen-bonded clusters. However, few studies have combined all three techniques (the thermodynamic model, spectroscopy, and ab initio calculations) in a rigorous way. To this end, an activity coefficient model for association is developed using Wertheim's perturbation theory, and its capabilities and limitations are explored with parameters from the literature. Furthermore, a sequential MD and QM protocol is designed that facilitates the interpretation of the hydroxyl vibration in infrared spectroscopy, and a method is developed to quantify the entire band. Finally, the methods are used to calculate the value of the association constant for an alcohol + alkane system. Vicarious interaction with politicians by identifying with surrogates on social media : a social identification mechanism based on multiple salient social categories Dai, Yue (College teacher) New media platforms display politicians' interactions with people from a variety of social categories.
Previous research shows that observers could vicariously experience parasocial intimacy toward a public figure by identifying with a surrogate—an individual who directly interacts with the public figure and who is considered an ingroup member by the observer based on a salient social category (Dai & Walther, 2018). Developments in the social identity literature call for further examination of this surrogacy effect in contexts where multiple social categories are activated as bases upon which observers identify with surrogates. Through two experiments involving a total sample of 1,068 participants, this research demonstrates that when a surrogate's identity is presented as different combinations of political affiliation (Democratic or Republican) and social status (ordinary voter or politician), the more categories observers share in common with the surrogate, the more they identify with the surrogate, and thereby experience greater parasocial intimacy toward a politician who is seen replying to the surrogate on Twitter. These findings extend previous findings on a social identification-based mechanism of the surrogacy effect and inform online impression management practices of politicians.
Poly(ethylene glycol) tailored polymers : nanomicelles with tunable lower critical solution temperature behavior Lien, Yu-Ling Propargyl and 1,1-dimethyl propargyl substituted poly(ethylene oxides) (propargyl substituted = poly(PGE), 1,1ʹ-dimethyl propargyl substituted = poly(MGE)) have been prepared by ring-opening polymerization of epoxides, which were synthesized from epichlorohydrin and propargyl or 1,1-dimethyl propargyl alcohol via Williamson ether synthesis. The resulting polymers were modified by Cu-catalyzed azide-alkyne cycloaddition (CuAAC) of the polymer propargyl groups and organic azides. When these reactions were carried out with mixtures of azides, the ratios of azides incorporated in the polymer side chains were equal to the molar ratios of the organic azide reactants (± 2%). Mixtures of hydrophobic (decyl azide) and hydrophilic (mDEG azide) azides result in amphiphilic polymers that exhibited lower critical solution temperature (LCST) behavior. The polymer LCSTs scaled from 48 to 97 ± 2 °C (poly(PGE)-derived amphiphiles) and 4 to 46 ± 1 °C (poly(MGE)-derived amphiphiles) in a roughly linear fashion with the mole fraction of hydrophilic side chains in the polymer. When charged azides, a COOH azide and an aminium azide, were used, the physical properties as well as the LCST behavior of the polymers were changed. The LCSTs of polymers incorporating charged azides were increased, and the LCSTs were decreased by adding salts to the solutions.
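The roughly linear LCST scaling with hydrophilic side-chain content lends itself to a simple interpolation sketch. The endpoints below are the reported ranges for the two polymer families; the strictly linear form and the function name are assumptions for illustration:

```python
def estimate_lcst(x_hydrophilic, lcst_lo, lcst_hi):
    """Linearly interpolate the cloud point (°C) between the LCSTs of the
    fully hydrophobic (x = 0) and fully hydrophilic (x = 1) side-chain limits."""
    if not 0.0 <= x_hydrophilic <= 1.0:
        raise ValueError("mole fraction must be between 0 and 1")
    return lcst_lo + (lcst_hi - lcst_lo) * x_hydrophilic

# Endpoints reported for the two polymer families (°C):
print(estimate_lcst(0.5, 48.0, 97.0))  # poly(PGE)-derived amphiphiles -> 72.5
print(estimate_lcst(0.5, 4.0, 46.0))   # poly(MGE)-derived amphiphiles -> 25.0
```

Within the reported ± 1-2 °C scatter, such a one-parameter rule is about as much as the "roughly linear" trend supports.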
The hydrodynamic radii (RH) obtained from DLS measurements indicate that the polymers form unimolecular micelles in water (Mn = 52,000 g/mol, PDI = 1.19, RH = 6 ± 2 nm), and TEM data showed monodisperse domains (20 ± 4 nm, for Mn = 52,000) when water was evaporated at room temperature from solutions cast on TEM grids. This length scale is consistent with domains that consist of single polymer chains. When the TEM grid was heated during evaporation, the domain size increased to 74 ± 45 nm. In solution, the unimolecular micelles can solubilize hydrophobic small molecules, such as trans-azobenzene (trans-PhN=NPh), in water. DLS data suggested that polymers encapsulating trans-PhN=NPh (trans-PhN=NPh@poly(PGE)- or poly(MGE)-derived amphiphiles) showed signs of aggregation in one case (RH = 12 ± 8 nm) and no signs of aggregation in another case (RH = 5 ± 2 nm). When the resulting solutions were raised above the polymer LCST, the polymer and small molecule precipitated. When the mixture was cooled below the LCST, the polymer and hydrophobic small molecule re-dissolved. The unimolecular micelles were used to encapsulate a hydrophilic macromolecule, Subtilisin Carlsberg (SC), in aqueous solution and organic media. Poly(PGE)- or poly(MGE)-derived amphiphiles with COOH pendant groups slowed down SC aggregation in an aqueous environment. Also, the activity of SC@poly(MGE)-derived amphiphiles with COOH pendant groups was assayed, and the half-life of SC at 50 °C was increased from 2 h to 10 h. Initial studies of SC@poly(PGE)- or poly(MGE)-derived amphiphiles in organic media showed enzymatic activity in toluene after 16 h at 37 °C. Pacific Standard Time : modernism and the making of West Coast jazz Spencer, Michael Thomas An interdisciplinary study of one of the most overlooked and understudied movements in the history of jazz, this dissertation draws from the fields of New Jazz Studies, Popular Culture Studies, and Art History in order to reconstruct the cultural history of West Coast jazz.
Focusing on the critical texts and institutions that allowed this movement to germinate and expand, I explore the ways in which the music was represented through various types of media: on record, on radio, on screen, in concert, and in print (i.e., record labels, radio stations, jazz periodicals, etc.). As a result, this study recontextualizes the West Coast jazz movement within the milieu of California modernism of the mid-20th century as a way to observe the broader jazz community, one which included musicians as well as photographers, painters, architects, sculptors, filmmakers, and other modernists. Matter and energy transformation : an investigation into secondary school students' arguments Onyancha, Kennedy M. Toward the development of a chemo-enzymatic process for the production of next-generation taxol analogs Ondari, Mark Evans Two thousand years of foraging ecology in the endangered Hawaiian petrel : insights from stable isotope analysis Wiley, Anne E.
Recent evidence indicates that over the last 150 years, humans may have impacted seabird populations through modification of their marine food resources. Unfortunately, the high mobility and large pelagic ranges of many seabirds have resulted in a dearth of information concerning even their basic feeding habits. Here, I use stable isotope analysis to investigate the modern and ancient foraging ecology of an endangered seabird, the Hawaiian petrel (Pterodroma sandwichensis). The stable isotopic composition of Hawaiian petrel tissues (δ13C and δ15N values) reflects trophic level and foraging location and can therefore be used to describe patterns of foraging segregation or long-term temporal variation within the species. Chapter 1 investigates isotopic variation within individual flight feathers, with the goal of designing minimally invasive and ecologically informative sampling strategies. δ13C values increased from tip to base in all 52 feathers within the study, including 42 remiges from the Hawaiian petrel and 10 from the Newell's Shearwater (Puffinus auricularis newelli). Such a consistent trend, observable among different species and age classes, is unlikely to result from shifts in diet or foraging location during feather synthesis. Considerable variation of δ15N values was also present within feathers (an average range of 1.3‰ within Hawaiian petrel remiges). A sampling protocol is proposed that requires only 1.0 mg of feather and minimal preparation time. Because it leaves the feather nearly intact, this protocol will likely facilitate obtaining isotope values from remiges of live birds and museum specimens. Chapter 2 explores ecological variability among modern Hawaiian petrel populations. δ13C and δ15N values of feathers demonstrate segregation in foraging location during both the breeding and non-breeding seasons for petrels nesting on Kauai and Hawaii.
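The link between δ15N and trophic level invoked throughout this work is commonly quantified with the standard textbook relation (e.g., Post 2002; this formula is background, not quoted from the dissertation):

```latex
\mathrm{TL} = \lambda + \frac{\delta^{15}\mathrm{N}_{\text{consumer}} - \delta^{15}\mathrm{N}_{\text{base}}}{\Delta^{15}\mathrm{N}}
```

where λ is the trophic level of the baseline organism and Δ15N ≈ 3.4‰ is the typical per-step enrichment. By this relation, a 1.4-2.6‰ shift in δ15N corresponds to roughly 0.4-0.8 of a trophic level.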
Genetic analyses based on the mitochondrial cytochrome b gene also reveal strong differentiation: coalescent-based analyses estimate < 1 migration event per 1,000 generations. Finally, feathers from multiple age groups and islands show unexpected divergences in δD that cannot be related to variation in source water. Overall, these data demonstrate foraging and genetic divergence between proximately nesting seabird populations. This divergence occurs despite high species mobility and a lack of physical barriers between nesting sites. Chapter 3 investigates Hawaiian petrel foraging habits and inter-colony segregation over the course of approximately 2,000 years. The most pervasive temporal trend is a 1.4-2.6‰ decrease in average δ15N values, which likely reflects declining trophic level over the past 300-1,000 years. Isotopic chronologies also document ca. 2,000 years of foraging segregation between Hawaiian petrel colonies, observed as a long-standing divergence in average δ15N values. The degree of foraging segregation between petrel colonies diminishes through time and correlates well with genetic population structure. Shifting foraging habits of the Hawaiian petrel may reflect relatively widespread trophic alterations in the pelagic realm of the North Pacific. Such changes in foraging are concerning, given their implications for reproductive success and genetic diversity. Rater effects in ITA testing : ESL teachers' versus American undergraduates' judgments of accentedness, comprehensibility, and oral proficiency Hsieh, Ching-Ni Second language (L2) oral performance assessment always involves raters' subjective judgments and is thus subject to rater variability. The variability due to rater characteristics has important consequential impacts on decision-making processes, particularly in high-stakes testing situations (Bachman, Lynch, & Mason, 1995; A. Brown, 1995; Engelhard & Myford, 2003; Lumley & McNamara, 1995; McNamara, 1996).
The purposes of this dissertation study were twofold. First, I wanted to examine rater severity effects across two groups of raters, English-as-a-Second-Language (ESL) teachers and American undergraduate students, when raters evaluated international teaching assistants' (ITAs) oral proficiency, accentedness, and comprehensibility. Second, I wanted to identify and compare rater orientations, that is, factors that drew raters' attention when judging the examinees' oral performances. I employed both quantitative and qualitative methodologies to address these issues concerning rater effects and rater orientations in the performance testing of ITAs at a large Midwestern university. Thirteen ESL teachers and 32 American undergraduate students participated in this study. They evaluated 28 potential ITAs' oral responses to the Speaking Proficiency English Assessment Kit (SPEAK). Raters evaluated the examinees' oral proficiency, accentedness, and comprehensibility using three separate holistic rating scales. Raters also provided concurrent written comments regarding their rating criteria and participated in one-on-one interviews that explored raters' rating orientations. I employed a many-facet Rasch measurement analysis to examine and compare rater severity across rater groups using the computer program FACETS. I compared the written comments across groups to identify the major rating criteria employed by the ESL teachers and the undergraduates.
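The many-facet Rasch model behind a FACETS analysis is commonly written in the following standard form (a textbook formulation, not quoted from the dissertation):

```latex
\log\frac{P_{nijk}}{P_{nij(k-1)}} = \theta_n - \delta_i - \alpha_j - \tau_k
```

where θ_n is examinee n's ability, δ_i the difficulty of task i, α_j the severity of rater j, and τ_k the step difficulty of rating category k. Because rater severity is estimated on the same logit scale as examinee ability, severities can be compared directly across the teacher and undergraduate groups.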
I analyzed the interview data to explore the reasons for rating discrepancies across groups. Results of the study suggested that the ESL teachers and the undergraduate raters did not differ in severity with respect to their ratings of oral proficiency. However, the comparisons of ratings of accentedness and comprehensibility were both statistically significant: the undergraduate raters were harsher than the teacher raters in their evaluations of examinees' accentedness and comprehensibility. Additionally, the analysis of the written comments identified six major rating criteria: linguistic resources, phonology, fluency, content, global assessment, and nonlinguistic factors. Cross-group comparisons of the rating criteria indicated that the undergraduate raters tended to evaluate the examinees' oral performances more globally than the ESL teachers did. In contrast, the ESL teachers tended to use a wider variety of rating criteria and commented more frequently on specific linguistic features. The interview protocols revealed that raters' experience with accented speech, perceptions of accent as an important rating criterion, and approaches to rating (i.e., analytical or global) had important bearings on raters' judgments of ITA speech. Hydraulic evaluation of lysimeters versus actual evapotranspirative caps Mijares, Ramil Garcia The ability to quantify percolation through a soil profile is one of the important considerations for geoenvironmental systems.
Reliable estimates of percolation through natural soil deposits help in determining local groundwater recharge rates. For landfills, accurate measurement of percolation through the cap is necessary for permitting earthen final covers. Even though percolation is generally the smallest component among the water balance parameters, quantifying its magnitude is environmentally critical and key to evaluating the overall hydraulic performance of final covers. Direct estimation of percolation through a soil cover is typically achieved using pan lysimeters, which consist of a drainage layer underlain by an impermeable geomembrane liner. The presence of this hydraulic barrier in the lysimeter, which is used to facilitate the collection and measurement of percolation, alters the hydraulics of the system. This dissertation aimed to evaluate the difference in hydraulic performance of a lysimeter versus an actual earthen cap with underlying landfilled waste. Two uncompacted and one compacted field-scale earthen cap test sections were built and instrumented at a landfill near Detroit, Michigan to investigate the hydraulic difference between an actual cap (underlain by waste) and a corresponding lysimeter, which was used to directly measure percolation. Lysimeter pans were installed in the middle of each test section, and the instrumented area was expanded upslope and downslope of the lysimeter to monitor the soil water storage within and beyond the lysimeter footprint. About 35 sensors were installed in each of the test sections to monitor water contents, water potentials, soil temperatures, water levels, and gas pressures. The field results show that soil water storage values for the uncompacted test sections underlain by waste were typically greater than those for the corresponding lysimeters. For the compacted test section, there was no significant difference between the soil water storage for the actual cap and the lysimeter.
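The storage contrast reported here can be illustrated with a toy daily bucket water balance. Everything below is hypothetical (capacities, rain pattern, and the rule that half of the interface water wicks back into an actual cap); it is a sketch of the mechanism, not the study's model:

```python
def run_cover(rain, pet, capacity, lysimeter):
    """Toy daily water balance for an earthen cover (mm). Storage fills with
    rain, is drawn down by evapotranspiration (ET), and water above capacity
    becomes drainage. A lysimeter removes drained water at once; in the actual
    cap, part of the interface water is assumed to remain available to ET."""
    storage = 0.0  # water held in the cover soil
    carry = 0.0    # water sitting at the cover/waste interface (actual cap only)
    perc = 0.0     # cumulative percolation
    for r, e in zip(rain, pet):
        if not lysimeter and carry > 0.0:
            storage += 0.5 * carry  # assumed: half of yesterday's excess wicks back
            perc += 0.5 * carry     # the rest continues into the waste
            carry = 0.0
        storage += r
        storage -= min(storage, e)             # ET, limited by available water
        excess = max(0.0, storage - capacity)  # water the soil cannot hold
        storage -= excess
        if lysimeter:
            perc += excess  # drained into the pan: removed and measured at once
        else:
            carry = excess
    return perc + carry

rain = [60.0 if d % 30 == 0 else 0.0 for d in range(180)]  # monthly storms
pet = [3.0] * 180                                          # constant ET demand
print(run_cover(rain, pet, capacity=40.0, lysimeter=True))
print(run_cover(rain, pet, capacity=40.0, lysimeter=False))
```

Under these assumptions the lysimeter reports more percolation than the actual cap, because the water it removes is never available for later evapotranspiration.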
Using the single-porosity numerical models UNSAT-H and Vadose/W, the field-measured percolation in the lysimeter as well as the variation in soil water storage were predicted with acceptable accuracy for the compacted test section. The presence of macropore flow through large clods in the uncompacted test sections is not accounted for in these single-porosity models. A numerical analysis showed that when a lysimeter underestimates the soil water storage of an actual earthen cap, it corresponds to greater actual percolation across the interface between the soil cover and the underlying waste. A lysimeter overestimates percolation because the infiltrated water drained into the lysimeter is immediately removed and is therefore not available for removal by evapotranspiration. Field-scale simulations also showed that the magnitude of the capillary barrier effect introduced by the drainage layer in the lysimeters is negligible when the saturated hydraulic conductivity of the soil cover is equal to or less than 10^-5 cm/s. Predicting differential item functioning in cross-lingual testing : the case of a high stakes test in the Kyrgyz Republic Drummond, Todd W. Cross-lingual tests are assessment instruments created in one language and adapted for use with another language group.
Practitioners and researchers use cross-lingual tests for various descriptive, analytical, and selection purposes, both in comparative studies across nations and within countries marked by linguistic diversity (Hambleton, 2005). Due to cultural, contextual, psychological, and linguistic differences between diverse populations, adapting test items for use across groups is a challenging endeavor. Of paramount importance in the test adaptation process is the proven ability of test developers to adapt test items across groups in meaningful ways. One way investigators seek to understand the level of item equivalence on a cross-lingual assessment is to analyze items for differential item functioning, or DIF. DIF is present when examinees from different language groups do not have the same probability of responding correctly to a given item, after controlling for examinee ability (Camilli & Shephard, 1994). In order to detect and minimize DIF, test developers employ both statistical methods and substantive (judgmental) reviews of cross-lingual items. In the Kyrgyz Republic, item developers rely on substantive review of items by bilingual professionals. In situations where statistical DIF detection methods are not typically utilized, the accuracy of such professionals in discerning differences in content, meaning, and difficulty between items is especially important. In this study, the accuracy of bilinguals' predictions about whether differences between Kyrgyz- and Russian-language test items would lead to DIF was evaluated. The items came from a cross-lingual university scholarship test in the Kyrgyz Republic. Evaluators' predictions were compared to a statistical test of "no difference" in response patterns by group using the logistic regression (LR) DIF detection method (Swaminathan & Rogers, 1990). A small number of test items were estimated to have "practical statistical DIF."
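The Swaminathan and Rogers procedure compares nested logistic models: item correctness predicted by ability alone versus ability plus group membership and an ability-by-group interaction, with a likelihood-ratio statistic that is approximately chi-square with 2 df when no DIF is present. The following self-contained sketch applies that comparison to simulated data; the simulation, fitting routine, and all numbers are illustrative, not taken from the Kyrgyz test:

```python
import math
import random

def fit_logistic(X, y, lr=0.5, iters=2000):
    """Fit a logistic regression by batch gradient ascent (stdlib only);
    returns the (approximately) maximized log-likelihood."""
    k, n = len(X[0]), len(y)
    w = [0.0] * k
    for _ in range(iters):
        grad = [0.0] * k
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
            for j in range(k):
                grad[j] += (yi - p) * xi[j]
        w = [wj + lr * gj / n for wj, gj in zip(w, grad)]
    ll = 0.0
    for xi, yi in zip(X, y):
        p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
        p = min(max(p, 1e-12), 1.0 - 1e-12)
        ll += yi * math.log(p) + (1 - yi) * math.log(1 - p)
    return ll

def lr_dif_statistic(ability, group, correct):
    """LR DIF statistic (Swaminathan & Rogers style): twice the log-likelihood
    gain of the ability + group + interaction model over the ability-only model."""
    base = [[1.0, a] for a in ability]
    full = [[1.0, a, float(g), a * g] for a, g in zip(ability, group)]
    return 2.0 * (fit_logistic(full, correct) - fit_logistic(base, correct))

random.seed(42)
n = 300
ability = [random.gauss(0.0, 1.0) for _ in range(n)]
group = [i % 2 for i in range(n)]  # 0 = reference language group, 1 = focal

def simulate_item(dif_shift):
    """Simulate right/wrong responses; dif_shift makes the item uniformly
    harder for the focal group after controlling for ability."""
    return [1 if random.random() < 1.0 / (1.0 + math.exp(-(1.2 * a - dif_shift * g)))
            else 0 for a, g in zip(ability, group)]

fair_item = simulate_item(0.0)
dif_item = simulate_item(1.5)
print("fair item LR stat:", lr_dif_statistic(ability, group, fair_item))
print("DIF item LR stat: ", lr_dif_statistic(ability, group, dif_item))
```

A fair item should produce a small statistic (below the chi-square(2) critical value of about 6 at the 5% level), while the deliberately biased item produces a large one.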
There was a modest, positive correlation between evaluators' predictions and statistical DIF levels. However, with the exception of one item type, sentence completion, evaluators were unable to predict on a consistent basis which language group was favored by the differences. Plausible explanations for this finding, as well as ways to improve the accuracy of substantive review, are offered. Data were also collected to determine the primary sources of DIF in order to inform the test development and adaptation process in the republic. Most of the causes of DIF were attributed to highly contextual (within-item) sources of difference related to overt adaptation problems. However, inherent language differences were also noted: syntax issues with the sentence completion items made the adaptation of this item type from Russian into Kyrgyz problematic. Statistical and substantive data indicated that the reading comprehension items were less problematic to adapt than the analogy and sentence completion items. I analyze these findings and interpret their implications for key stakeholders, provide recommendations for how to improve the process of adapting items from Russian into Kyrgyz, and highlight cautions for interpreting the data collected in this study. Institutionalization of digital literacies in four-year Liberal Arts institutions Wendt, Mary Ellen
Few in the field of Rhetoric and Writing debate digital literacy's value in higher-level institutions today, yet while faculty in general echo this same value, the actual institutionalization of digital literacy--especially in liberal arts institutions--stands in question. This dissertation project, situated in the field of digital rhetoric and positioned theoretically with postmodern constructs, approaches research in digital literacy issues and "institutionalizing" digital literacy. I examine findings using activity theory and genre theory to construct a model of the Operational Life Cycle of the Institutionalization of Digital Literacy. This model of the Operational Life Cycle has several purposes: it can visually enable others to navigate the murky journey of institutionalization; it provides a clear framework for understanding the complexities of institutional work; and it demonstrates the possibility that any size school, even with limited funds, can institutionalize digital literacy. This kind of model illuminates two ideas: one, the power of the centrifugal and centripetal outcomes (genres) of the activities in the Life Cycle, which can perpetuate and speed along such institutionalization; and two, such institutionalization requires the participation of the institution at large, English departments more specifically, and faculty members as individuals. Without such participation, holes in the Life Cycle render the institutionalization of digital literacy a much more difficult challenge.
Discrete & Continuous Dynamical Systems - S, October 2013, 6(5): 1277-1289. doi: 10.3934/dcdss.2013.6.1277

A remark on a Liouville problem with boundary for the Stokes and the Navier-Stokes equations

Yoshikazu Giga, Graduate School of Mathematical Sciences, University of Tokyo, Komaba 3-8-1, Tokyo 153-8914

Received November 2011; Revised January 2012; Published March 2013

We construct a Poiseuille type flow which is a bounded entire solution of the nonstationary Navier-Stokes and the Stokes equations in a half space with the non-slip boundary condition. Our result in particular implies that there is a nontrivial solution for the Liouville problem under the non-slip boundary condition. A review of the cases of the whole space and a slip boundary condition is included.

Keywords: Navier-Stokes equations, Liouville problem, non-slip boundary condition, Poiseuille type flow.

Mathematics Subject Classification: Primary: 35Q30; Secondary: 35B53, 76D0.

Citation: Yoshikazu Giga. A remark on a Liouville problem with boundary for the Stokes and the Navier-Stokes equations. Discrete & Continuous Dynamical Systems - S, 2013, 6 (5): 1277-1289. doi: 10.3934/dcdss.2013.6.1277
In situ measurements of the volume scattering function with LISST-VSF and LISST-200X in extreme environments: evaluation of instrument calibration and validity

Håkon Sandven, Arne S. Kristoffersen, Yi-Chun Chen, and Børge Hamre*

Department of Physics and Technology, University of Bergen, Allegaten 55, 5007 Bergen, Norway

*Corresponding author: [email protected]

https://doi.org/10.1364/OE.411177

Håkon Sandven, Arne S. Kristoffersen, Yi-Chun Chen, and Børge Hamre, "In situ measurements of the volume scattering function with LISST-VSF and LISST-200X in extreme environments: evaluation of instrument calibration and validity," Opt. Express 28, 37373-37396 (2020)

Revised Manuscript: November 12, 2020; Manuscript Accepted: November 12, 2020

The LISST-VSF and LISST-200X are commercial instruments made available in recent years, enabling underwater measurements of the volume scattering function, which has not been routinely measured in situ owing to a lack of instrumentation and the difficulty of the measurement. Bench-top and in situ measurements have enabled absolute calibration of the instruments and evaluation of their validity ranges, even at environmental extremes such as the clear waters at the North Pole and turbid glacial meltwaters. Key considerations for the instrument validity ranges are ring detector noise levels and multiple scattering. In addition, Schlieren effects can be significant in stratified waters.
Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI. Changes in ocean ecosystems due to anthropogenic climate change necessitate an increased level of environmental monitoring of the ocean. While ocean color data from remote sensing provide observations with extensive temporal and spatial coverage, it is often difficult to acquire accurate quantitative measurements of ocean constituents such as phytoplankton and colored dissolved organic matter (CDOM) in productive coastal regions [1,2]. Improvements in ocean color products require enhanced measurements of the inherent optical properties (IOPs). These properties describe the influence of the water medium on light propagation and are independent of the light field. They are measured in situ, typically with an active source and detector. Such optical in situ measurements can obtain high-resolution information about the vertical structure of the water column, which is unretrievable from satellite observations. While the spectral absorption and attenuation coefficients have been routinely measured for several years, direct measurements of scattering properties are sparser. Unlike absorption, scattering also has a directional variability, quantitatively described by the volume scattering function (VSF). Due to the lack of measurements, the VSF is often represented by simplified parameterisations, and scattering errors are corrected by empirical formulas. Thus, high-accuracy routine VSF measurements, which can be provided by the LISST-VSF and LISST-200X instruments, would be an important development within ocean optics research. This includes input for radiative transfer models, calculations of suspended particle properties [3,4], and corrections to other IOP measurements [5].
The volume scattering function (VSF or $\beta$, used interchangeably) is a fundamental IOP that represents the ability of a medium to scatter light in a certain direction. It is mathematically formulated as (1)$$\beta(\theta)=\frac{\textrm{d} I(\theta)}{E \ \textrm{d} V} \ [\textrm{m}^{-1} \textrm{sr}^{-1}],$$ where $E$ is the irradiance of an incident unpolarized beam, and $\textrm{d}I$ is the radiant intensity of the scattered light from the volume element $\textrm{d}V$ at an angle $\theta$ relative to the incident beam. Here, no azimuthal dependency is assumed, which is the case for media with randomly oriented scatterers. The more routinely measured scattering coefficient $b$ can be calculated from the VSF using (2)$$b=2\pi \int_{0}^{\pi} \beta(\theta) \sin{\theta} \textrm{d}\theta.$$ Moreover, the phase function $p(\theta)$, which is the normalized VSF, is defined by $p(\theta)=\beta(\theta)/b.$ While the scattering coefficient may vary by several orders of magnitude depending on the ocean constituents in the respective water mass and, to a lesser degree, on wavelength, the phase function tends to depend less on ocean constituents and wavelength in natural waters. Hence, the phase function has often been subject to simplified models in radiative transfer modelling. Other related quantities are the backscattering coefficient $b_b$, given by (3)$$b_b=2\pi \int_{\pi/2}^{\pi}\beta(\theta) \sin{\theta} \textrm{d}\theta,$$ and the asymmetry factor $g$, (4)$$g=\langle\cos{\theta}\rangle = \frac{2\pi}{b} \int_{0}^{\pi} \beta(\theta) \cos{\theta} \sin{\theta} \textrm{d}\theta.$$ Due to the $\cos{\theta}$-term, the asymmetry factor is more sensitive to scattering in the far-forward and far-backward directions than the scattering and backscattering coefficients are, which makes it challenging to measure accurately. It is often applied when assessing whether multiple scattering can be neglected.
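Given a VSF tabulated on an angular grid, the integrals in Eqs. (2)-(4) are straightforward to evaluate numerically. A minimal sketch, assuming numpy; the Rayleigh-like test VSF used here is a hypothetical stand-in chosen so the results can be checked analytically, not instrument data, and the trapezoid rule is written out to avoid version-specific numpy helpers:

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoid-rule integral of samples y over grid x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def scattering_coefficient(theta, beta):
    """Eq. (2): b = 2*pi * integral_0^pi beta(theta) sin(theta) dtheta."""
    return 2.0 * np.pi * trapezoid(beta * np.sin(theta), theta)

def backscattering_coefficient(theta, beta):
    """Eq. (3): the same integral restricted to the backward hemisphere."""
    mask = theta >= np.pi / 2.0
    return 2.0 * np.pi * trapezoid(beta[mask] * np.sin(theta[mask]), theta[mask])

def asymmetry_factor(theta, beta):
    """Eq. (4): intensity-weighted mean cosine <cos(theta)>."""
    b = scattering_coefficient(theta, beta)
    return 2.0 * np.pi / b * trapezoid(beta * np.cos(theta) * np.sin(theta), theta)

# Rayleigh-like test VSF, symmetric about 90 degrees: analytically
# b = 16*pi*beta0/3, b_b/b = 0.5 and g = 0.
theta = np.linspace(0.0, np.pi, 4001)
beta0 = 1.0e-4                          # [m^-1 sr^-1], arbitrary scale
beta = beta0 * (1.0 + np.cos(theta) ** 2)

b = scattering_coefficient(theta, beta)
bb = backscattering_coefficient(theta, beta)
g = asymmetry_factor(theta, beta)
```

The symmetric test function makes the expected values obvious: half the scattering goes backward, and the mean cosine vanishes.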
The VSF of a medium containing randomly distributed spheres of uniform size and homogeneous structure was fully solved by Gustav Mie as a solution of Maxwell's equations [6]. For monodispersed spheres, only the particle concentration, the size parameter (particle diameter relative to wavelength), and the relative complex refractive index determine the VSF. The solution becomes much more complex for non-spherical particles (see the work of Mishchenko, e.g. [7]) and non-homogeneous optical properties. This makes forward modelling of the VSF in natural waters challenging, and the inverse problem even more so. Among others, the Fournier-Forand model [8] and Zhang et al. [9] utilize assumptions about particle size distributions and compositions to approximate the VSF. While the refractive index influences scattering at all angles, the VSF depends strongly on the particle size distribution in the forward direction. Hence, the particle size distribution can be retrieved from small-angle scattering measurements using inversion methods. Known as laser diffraction, this technique forms the physical motivation behind a series of LISST instruments (Laser In Situ Scattering and Transmissometry, produced by Sequoia Sci.), which are routinely used for sediment and oceanographic studies. The working principle is illustrated in Fig. 1. A laser beam of known power is transmitted through a sample chamber. The transmitted light is detected by a transmission detector, from which the attenuation coefficient can be calculated. The scattered light passes through a lens and onto a ring detector placed at the lens' focal length. Hence, all light scattered at a certain angle from the beam hits the same radius on the ring detector, where it is detected by silicon photodetector arcs covering logarithmically-spaced radii and, consequently, scattering angles.
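The lens geometry described above can be sketched numerically. The focal length, ring count, and ring radii below are hypothetical illustration values, not the actual LISST dimensions; the sketch only demonstrates how logarithmically spaced ring radii map to (near-)logarithmically spaced angle bins:

```python
import numpy as np

# Hypothetical optics: NOT the real LISST dimensions.
f = 0.1                                  # collimating-lens focal length [m]
n_rings = 36                             # assumed number of photodetector arcs
r_inner, r_outer = 1.0e-4, 2.0e-2        # innermost/outermost ring radii [m]

# Light scattered at angle theta crosses the lens and lands at radius
# r = f * tan(theta) in the focal plane, independent of where along the
# beam the scattering occurred.
r_edges = np.logspace(np.log10(r_inner), np.log10(r_outer), n_rings + 1)
theta_edges_deg = np.degrees(np.arctan(r_edges / f))

# Logarithmic radii spacing means a constant radius ratio between rings.
ratio = r_edges[1] / r_edges[0]
```

With these assumed numbers the angle bins span roughly 0.06 to 11 degrees, which is at least the right order of magnitude for a small-angle diffraction instrument.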
Agrawal [10] demonstrated how the scattering data from the LISST-100 ring detector could be used to compute the shape of the VSF, with the angular resolution covered by the ring detector arcs. Slade and Boss [11] used polystyrene beads to calibrate LISST-100 scattering data, yielding both the correct VSF shape and magnitude for angles 0.08-15$^{\circ }$. Later, multiple studies have utilized LISST instruments for VSF measurements [12–15]. In [16–18], the Schlieren effect on forward scattering and attenuation has been investigated. However, the LISST-200X, which is the most recent successor of the LISST-100 and measures the VSF for angles 0.04-13$^{\circ }$ at 670 nm, has not previously been calibrated or used for VSF measurements to our knowledge. Fig. 1. The working principle of the ring detector used for LISST-200X. The detector plane is placed in the focal plane of a collimating lens, so that light scattered with angle $\theta$ will hit the detector plane at radius $r$. The LISST-VSF ring detector is similar, but with a 515 nm laser wavelength and a longer pathlength $L$. The LISST-VSF is a recently released instrument measuring the VSF from 0.09$^{\circ }$ to 150$^{\circ }$. Compared to the scattering and backscattering coefficients, as well as small-angle scattering, the large-angle VSF has been only sparingly measured in situ. The primary reason is the technical difficulty of the measurements; the ratio of forward to backward scattered radiance places a high demand on the dynamical range of the instrument. The measurements of Tyler [19] and Petzold [20] were among the first attempts to measure the VSF, and the latter has emerged as the most widely cited set of VSF measurements. While the Petzold measurements are limited in geographical and environmental scope, they are of remarkable quality over a large angular range and are highly beneficial as benchmark figures.
Modern studies have focused more on laboratory measurements, but also include some in situ measurements (see [21] for an overview). For the LISST-VSF, the dynamical range is covered by using the aforementioned ring detector up to 14.38$^{\circ }$, and a rotating eyeball detector at larger angles. The laser power is decreased when the eyeball position is between 15$^{\circ }$ and 40$^{\circ }$ to accommodate the large differences in the scattering signal. In addition to the VSF, the eyeball detector also yields data allowing computation of Mueller matrix components M$_{12}$ and M$_{22}$. There is still a limited number of published studies with in situ or bench-top results from the LISST-VSF. Slade et al. [22] contains the first published results with the LISST-VSF, with bench-top measurements of polystyrene and size-fractioned Arizona Test Dust. Here, the degree of polarization was measured in addition to the VSF. The instrument has been shown to agree well with two other prototype VSF instruments, I-VSF and POLVSM [23]. However, due to unfortunate instrument damage, only the ring detector measurements were usable in this study. In the article by Koestner et al. [24], measurements with polystyrene beads of diameters in the sub-micrometer range were used to show that a correction function, $\beta _p^{\textrm {corr}}(\theta )=\textrm {CF}(\theta )\times \beta _p^{\textrm {meas}}(\theta )$, can be used to validate and correct LISST measurements against scattering predicted from Mie theory. Values of the correction function varied in the range 1.7-2.2 in this study. Moreover, laboratory measurements were done on natural seawater samples from different marine environments around the Southern California coast. In addition, the degree of linear polarization was also thoroughly investigated in a similar fashion, showing the further potential of LISST-VSF measurements.
This work was very recently expanded upon in [25], where relationships between the measured Mueller matrix components and marine particle properties were investigated. The LISST-VSF has also been used in some optical communication studies (e.g. [26,27]), and in Sahoo et al. [28], where measurements were done in situ at discrete depths in the Bay of Bengal. While earlier studies have used the default relative calibration, Hu et al. [29] offered a significant improvement with the implementation of an absolute calibration of the eyeball detector (see section below for details). This decouples the two detector measurements and enables VSF measurements in very clear waters; this was utilized in a study where the VSF fraction of particles smaller than 0.2 and 0.7 $\mu$m in clear ocean waters was measured in bench-top mode, by filtering water during a research cruise in the North Pacific Ocean [30]. In this work, we similarly present results from both submerged polystyrene and polymethacrylate beads and natural waters using the LISST-200X and LISST-VSF. Our approach to absolute calibration utilizes larger beads and a larger concentration range than most earlier studies, which better delineates the validity ranges of the instruments. Polarization measurements with the LISST-VSF, the Mueller matrix components $M_{12}$ and $M_{22}$, have not been included in the study for brevity. The focus of the natural water measurements has been in situ data collection using profiling deployment. Fieldwork has been conducted in highly diverse environments, such as the Arctic Ocean and coastal waters of the Svalbard archipelago during the INTAROS-2018 and CAATEX-2019 cruises, and in various coastal waters in southwestern Norway. We evaluate the need for temperature and salinity corrections, compare the validity ranges of the two instruments, and assess the effect of Schlieren on measurements in stratified waters.
Finally, we look into extrapolation of forward scattering to estimate the scattering coefficient, which could be another useful application for the LISST-200X in turbid waters. 2.1 Laboratory calibration measurements Spherical beads with microscopic, low-variance diameters, made of polystyrene or polymethacrylate, made it possible to perform absolute calibration or validation of scattering measurements. Knowing the bead size distribution, relative refractive index and concentration, Mie theory can be used to calculate the exact VSF of the plastic beads submerged in pure water. Consequently, measurements from the LISST instruments may be compared with accurate theoretical values. For the relative complex refractive index of polystyrene beads, we used values found in [31]. For polymethacrylate (PMMA) beads, values from [32] were used. Theoretical scattering was calculated using Gaussian particle size distributions with the specified size variations. Each VSF was converted to instrument-specific angular resolutions by finding the mean value within each angular bin, which corresponds to the assumption made by the instrument data processing. For transmission values within the instrument range, the VSF measurements have been corrected for volume concentration errors by re-scaling using the ratio of theoretical to measured attenuation, similar to the method used in [11] and [24], (5)$$\beta_{\textrm{corr}} (\theta) = \frac{c_{\textrm{Mie}}}{c_{\textrm{meas}}} \beta_{\textrm{meas}}(\theta).$$ The motivation for doing bench-top measurements using beads is different for each of the studied sensors. The LISST-VSF ring detectors have already been factory-calibrated for VSF measurements, so validation is the primary goal. The LISST-200X has not been directly calibrated for VSF measurements, meaning that absolute calibration is necessary.
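The attenuation-ratio rescaling of Eq. (5) amounts to a single multiplication; a sketch with made-up attenuation values and VSF samples (none of these numbers come from the paper):

```python
import numpy as np

# Illustrative values only: c_mie would come from Mie theory, c_meas from
# the transmission measurement; beta_meas holds a few VSF samples.
c_mie, c_meas = 1.32, 1.21               # attenuation [m^-1]
beta_meas = np.array([5.1, 2.2, 0.9])    # measured VSF [m^-1 sr^-1]

# Eq. (5): re-scale the measured VSF so its implied attenuation matches
# theory, absorbing errors in the bead volume concentration.
beta_corr = (c_mie / c_meas) * beta_meas
```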
For the LISST-VSF eyeball detector, the current default data processing uses a relative calibration to calculate the VSF, where the VSF values measured by the two outermost ring detectors (angles 12.32$^{\circ }$ and 14.38$^{\circ }$) are extrapolated to the first angle of the eyeball detector (15$^{\circ }$). The ratio of the extrapolated value to the uncalibrated eyeball detector value $P_{11}^{\textrm {uncal}}(15)$ is subsequently used as a scaling factor for calculating the VSF from $P_{11}^{\textrm {uncal}}(\theta )$. This method is highly sensitive to uncertainties in the ring scattering data. In addition, bead measurements spanning a large concentration range allow an assessment of the validity of the VSF measurements; when does the linear relationship between particle concentration and VSF, or attenuation, break down? The topic of instrument validity ranges with respect to particle size and concentration is further discussed in section 3.1.1. 2.1.1 Overview of data processing The VSF is computed from the scattering data output of the LISST instruments (digital counts), using factory-provided data processing procedures outlined here. The ring detector data processing has been treated in detail in [3,10]. The raw signal, denoted scat, is digital counts, from which ambient light has been rejected. This is corrected for instrumental artifacts using (6)$$\textrm{cscat} = \textrm{scat}/\tau - \textrm{zscat}.$$ Here, zscat are background scattering measurements made in pure water to account for intrinsic pure water scattering and optical losses in the instrument. In contrast to zscat and scat, the corrected scattering signal cscat is no longer an integer due to the division by $\tau$. The transmission ratio is $\tau = T/T_0$, where $T$ is the measured transmission (measured laser power $I_{\textrm {out}}/I_{\textrm {incident}}$) and $T_0$ is the measured transmission from the background measurement.
LISST-200X cscat values are also divided by a concentration calibration factor (for particle size distribution calculations), yielding values many orders of magnitude smaller than LISST-VSF cscat values. This has no impact on VSF measurements. For the LISST-VSF eyeball detector, the transmission must be calculated from the attenuation $c$, $\tau = \exp (-cL)$, as the pathlength $L$ of the detected light beam varies with the eyeball angle. The ring cscat data is subsequently converted to the VSF using the expression (7)$$\beta_{i,p} (\theta) = \frac{P_{i,p}}{P_0} \cdot \frac{C_{i}}{2 \pi \phi( \cos{\theta_{i+1}}-\cos{\theta_{i}})L},$$ where $P_{i,p}$ is the cscat scattering data on ring $i$ counted from the centre, $P_0$ is the incident light, $\theta _{i}$ and $\theta _{i+1}$ are the angles corresponding to the inner and outer radii of ring $i$, and $\phi = 1/6$ denotes that each detector only covers 1/6 of a circle. Furthermore, $C_{i}$ represents constants for geometrical corrections such as vignetting. In addition, the detector sensitivities must be known accurately. The eyeball scattering data follow another processing procedure; four components of scattering data have been measured using combinations of source and detector polarizations. Each of these components is first corrected for ambient light by rapidly turning the laser on and off and subtracting the measured ambient light. Then the components are corrected for differences in transmission due to use of a half-wave plate, before the components are corrected for attenuation-loss and laser drift, and the background measurements (matched with the PMT-gain) are subtracted. At around $45^{\circ }$, there is a change in laser power. This is corrected for using a factory-provided calibration factor, and interpolating the data between $44^{\circ }$ and $51^{\circ }$.
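The ring-detector chain of Eqs. (6) and (7) above can be sketched as follows. All numeric values are synthetic (the real $C_i$, $P_0$ and ring angles are factory calibration data), and the solid-angle term is written as $\cos\theta_i-\cos\theta_{i+1}$ so that it is positive for increasing angles:

```python
import numpy as np

def ring_vsf(scat, zscat, tau, theta_edges, P0, C, L, phi=1.0 / 6.0):
    """Eqs. (6)-(7): raw ring counts -> beta(theta) [m^-1 sr^-1].

    scat, zscat : per-ring digital counts (sample, pure-water background)
    tau         : transmission ratio T/T0
    theta_edges : ring-boundary angles [rad], length len(scat) + 1
    P0          : incident laser power (digital counts)
    C           : per-ring geometric correction constants (vignetting etc.)
    L           : optical pathlength through the sample [m]
    phi         : fraction of the full circle covered by each detector arc
    """
    cscat = scat / tau - zscat                                  # Eq. (6)
    dcos = np.cos(theta_edges[:-1]) - np.cos(theta_edges[1:])   # > 0
    return cscat / P0 * C / (2.0 * np.pi * phi * dcos * L)      # Eq. (7)

# Round-trip check: forward-model the counts for a known constant VSF...
theta_edges = np.radians(np.array([0.10, 0.20, 0.40, 0.80]))
P0, L, C = 1.0e6, 0.15, np.ones(3)
beta_true = 0.5
dcos = np.cos(theta_edges[:-1]) - np.cos(theta_edges[1:])
scat = beta_true * P0 * 2.0 * np.pi * (1.0 / 6.0) * dcos * L / C

# ...and recover it (tau = 1, zscat = 0 for simplicity)
beta = ring_vsf(scat, np.zeros(3), 1.0, theta_edges, P0, C, L)
```

The round trip recovers the assumed constant VSF exactly, which is a useful sanity check on the sign and placement of each factor.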
Moreover, a geometric correction is applied for a small misalignment between laser and eyeball viewing plane, as well as a relative gain correction for differences between the laser polarizations (we used the automatic $\alpha$-value). Finally, the components are used to compute the VSF and additional polarization components. The VSF is computed by first taking the average of the four corrected components, yielding $P_{11}^{\textrm {uncal}}(\theta )$, which is then scaled using the absolute or relative calibration. We refer to [24,25,29] for further details on the eyeball detector. 2.1.2 Experiment procedure Each laboratory experiment started with filling the factory-provided sample chambers of the instruments with ultrapure water (Milli-Q). In order to minimize uncertainties in the bead concentrations, care was taken to add an accurate amount of water: 18 mL for the LISST-200X and 1620 mL for the LISST-VSF sample chamber. After adding ultrapure water, at least one hour was allowed for bubbles and possible temperature differences to dissipate, before blank measurements were made. For the LISST-VSF, the sample chamber mixers always had to be used when measuring, in order to get non-fluctuating transmission values. Solutions of polystyrene or PMMA beads were then added to the sample chamber using pipettes (Eppendorf Research Plus), so that the bead concentration could be known with a high degree of certainty. For each experiment, a cumulative amount of the bead solution was added, yielding a measurement series of increasing bead concentration. For each bead concentration, approximately 100 single measurements were done with both instruments. 2.1.3 LISST-VSF eyeball detector calibration Polystyrene beads (0.190 $\mu$m (Sigma-Aldrich); 0.508 and 25.1 $\mu$m (Thermo Fischer Scientific)) in different concentrations were used for the absolute calibration of the LISST-VSF eyeball detector. 
Following [29], the absolute calibration was implemented with the equation (8)$$\beta^{\textrm{eyeball}}(\theta) = \kappa(\theta,V_0)\Big(\frac{V_0}{V}\Big)^{\gamma}P_{11}^{\textrm{uncal}}(\theta,V).$$ Here, $V$ is the PMT voltage of the respective measurement and $V_0$ is a reference voltage (selected to be 645 mV in both this and the aforementioned study). The term $(V_0/V)^\gamma$ is a conversion factor, yielding a linear relationship between $\beta ^{\textrm {eyeball}}(\theta )$ and $P_{11}^{\textrm {uncal}}(\theta ,V)$ irrespective of PMT voltage. The coefficient $\gamma$ depends on dynode material and geometry; the value used in the Hu et al. [29] study, $\gamma = 8.6$, was also in excellent agreement with our data. Finally, $\kappa (\theta ,V_0)$ is the calibration coefficient, which can be calculated from bead measurements and theoretical VSF values using linear regression of Eq. (8). 2.1.4 Ring detector calibration The LISST-VSF ring detector has been evaluated using polystyrene beads (0.508, 2.504 and 25.1 $\mu$m (Thermo Fischer Scientific)) and PMMA beads (4.92 $\mu$m (Sigma-Aldrich)) in different concentrations. We used different angular domains for each type of bead, due to factors described in section 3.1.1. As mentioned above, Koestner et al. [24] introduces a correction function CF, based on bead measurements, which is applied to correct already processed VSF measurements $\beta ^{\textrm {meas}}$. A linear relationship is assumed between the measured and true VSF, so that $\beta ^{\textrm {corr}} = \textrm {CF} \times \beta ^{\textrm {meas}}$. CF was calculated by finding the median of $\beta ^{\textrm {true}}/\beta ^{\textrm {meas}}$ for each angle, where $\beta ^{\textrm {true}}$ is the theoretical VSF (this is subsequently referred to as method 1).
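The absolute eyeball calibration of Eq. (8) reduces to a zero-intercept regression once the counts are mapped onto the reference-voltage scale. A sketch with synthetic bead data; $\gamma$ and $V_0$ follow the values quoted above, while the VSF values, PMT voltages and "true" $\kappa$ are invented for illustration:

```python
import numpy as np

gamma, V0 = 8.6, 645.0                   # PMT exponent, reference voltage [mV]

def to_reference_voltage(P11_uncal, V):
    """Map uncalibrated counts taken at PMT voltage V onto the V0 scale."""
    return (V0 / V) ** gamma * P11_uncal

# Synthetic bead measurements at one angle: theoretical (Mie) VSF values,
# per-measurement PMT voltages, and counts generated from a known kappa.
kappa_true = 2.5e-6
beta_mie = np.array([0.02, 0.05, 0.11, 0.23])   # [m^-1 sr^-1]
V = np.array([600.0, 645.0, 700.0, 760.0])      # [mV]
P11_uncal = beta_mie / (kappa_true * (V0 / V) ** gamma)

# kappa(theta, V0) from a zero-intercept least-squares fit of Eq. (8)
x = to_reference_voltage(P11_uncal, V)
kappa = float(np.dot(x, beta_mie) / np.dot(x, x))
```

Because the voltage conversion linearizes the relationship, a single slope fit per angle suffices regardless of the PMT voltage used in each measurement.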
We use a standard least-squares fitting of the measured data to the theoretical values to compute the correction function (method 2), (9)$$\beta^{\textrm{true}} = A\beta^{\textrm{meas}},$$ and compare with the method stated in [24]. Finally, we also compare with the linear model (10)$$\beta^{\textrm{true}} = a\beta^{\textrm{meas}} + b,$$ to check for possible "zero scattering" offsets in the measured data (method 3). The LISST-200X was calibrated in a similar way. As the laser power per digital count is not known for the incident laser detector, the default output has the wrong magnitude. Different concentrations of polystyrene beads (2.504 and 25.1 $\mu$m (Thermo Fischer Scientific)) and PMMA beads (99.0 $\mu$m (Sigma-Aldrich)) were used. The 99 $\mu$m beads were challenging to keep suspended; persistent mixing using a pipette made it possible to measure in the small LISST-200X sample chamber, but not with the LISST-VSF. 2.2 Fieldwork 2.2.1 In situ measurements During the field deployments, the LISST instrument measurements were conducted by continuous profiling down to a depth of 50 m, which is the factory-specified maximum operational depth of the LISST-VSF. After initial tests, continuous profiling was found to be the prudent choice, as stationary measurements gave highly fluctuating transmission values. This is consistent with bench-top measurements; even when the water is ultrapure, static water is detrimental to transmission measurements. We speculate that this is due to microturbulence along the beam caused by the laser, but mirror-like reflections by large slow-moving particles could also contribute. This is not seen for the LISST-200X, but continuous profiling is also used here for consistency. Continuous profiling also puts some constraints on the measurements. The winch system operated with an ascent and descent speed of approximately 0.5 m/s. For the LISST-200X, which has a sample rate of 1 Hz, each sample will then cover 0.5 m.
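Returning to the correction-function fits of Eqs. (9) and (10), all three methods reduce to simple one-dimensional regressions. A sketch on synthetic data, where the "true" VSF is constructed as exactly 1.9 times the measured one so that every method should recover that factor with zero offset (the numbers are illustrative, not the CF values reported in [24]):

```python
import numpy as np

# Synthetic ring data at one angle: "true" (Mie) VSF = 1.9 x measured VSF.
beta_meas = np.linspace(0.01, 1.0, 50)      # [m^-1 sr^-1]
beta_true = 1.9 * beta_meas

# Method 1 (Koestner et al. [24]): median of the per-point ratios
cf_median = float(np.median(beta_true / beta_meas))

# Method 2, Eq. (9): zero-intercept least squares, beta_true = A * beta_meas
A = float(np.dot(beta_meas, beta_true) / np.dot(beta_meas, beta_meas))

# Method 3, Eq. (10): affine fit beta_true = a * beta_meas + b; a nonzero
# b would indicate a "zero scattering" offset in the measured data
a, b = np.polyfit(beta_meas, beta_true, 1)
```

On real data the three estimates differ when the measurement noise is concentration-dependent or when an offset is present, which is exactly what the comparison is meant to expose.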
With the LISST-VSF, each sample takes 4 seconds; thus, one sample will cover 2 m. Moreover, the LISST-VSF acquires data by first doing an eyeball and ring detector measurement with perpendicular polarization of the incident light, then another with parallel polarization. This means that the Mueller matrix components can only be reliably measured using continuous profiling in a uniform or slowly changing water column. However, the VSF is calculated from a simple average of the different light measurements. To ensure high data quality of the LISST-VSF, multiple casts were always made, typically from three in regular waters to seven in very clear waters. The subsequent data processing includes depth-binning the measurements and calculating the median VSF. Physical oceanographic quantities have been obtained using the Castaway-CTD or Rockland Scientific VMP-250 vertical profiler. 2.2.2 Locations In situ measurements were conducted during three field campaigns, as well as four different days at the Espegrend Marine Biology Lab in Raunefjorden outside Bergen, Norway (in April 2018, June 2018, June 2019 and November 2019), see Fig. 2 for an overview. During the INTAROS-2018 cruise with the Norwegian coastguard vessel KV Svalbard in the Arctic Ocean north of Svalbard, a total of nine measurement stations were occupied. Five stations were conducted in the region around the ice edge, as well as in ice leads and under ice floes. The last four stations were in coastal waters of the Svalbard archipelago, for instance in Rijpfjorden, a fjord on northeastern Svalbard with a large glacier calving into the fjord. More measurements were performed in the central Arctic during the CAATEX-2019 cruise in August-September 2019, also with KV Svalbard. Station 1 of this cruise was conducted at the North Pole, and the subsequent stations were made in the ice-covered ocean south towards Svalbard.
Finally, further measurements on glacial meltwater in Norwegian coastal waters were conducted in Gaupnefjorden in June 2019, a fjord arm of Sognefjorden in Western Norway. In total, 25 measurement stations are included in this study, with a significant span in optical characteristics as well as geographical extent. Fig. 2. Map showing the locations of the fieldwork conducted in this study. During the INTAROS-2018 cruise (in green) nine stations were conducted. Nine stations were also conducted during the CAATEX-2019 cruise (in red). Locations of additional fieldwork in Norwegian fjords are shown in blue. 2.2.3 In situ temperature and salinity corrections In clear waters, the scattering of the water itself may have a significant contribution to the total measured scattering at large angles [30]. Using a blank measurement will remove the scattering at the temperature and salinity of the pure water used. However, the temperature and salinity will almost never be the same in situ as in the blank, making a temperature and salinity correction necessary. In a previous study, this was addressed by not using a blank for field measurements, but simply subtracting the pure water scattering directly [29]. This assumes no optical losses by the instrument, which may be negligible for new instruments but not after extensive use and time, e.g. increased transmission loss in the optical windows. Thus, we suggest another approach. The measured VSF ($\beta _m$) may be assumed to be the sum of particulate scattering $\beta _p$, pure water scattering $\beta _w$ and optical losses $\beta _L$, (11)$$\beta_m = \beta_p + \beta_w (T,S) +\beta_L.$$ We have not investigated polarization dependencies of the optical loss, but as the scattered light enters the optical window perpendicular (or near-perpendicular) to the window surface, we do not expect major polarized components.
For blank measurements, particulate scattering is assumed to be zero, yielding the expression for the measured blank VSF, (12)$$\beta_{\textrm{BG}} = \beta_w(T_{\textrm{BG}},0) +\beta_L.$$ Since the optical loss term is the same in both instances, one can solve for the particulate scattering, (13)$$\beta_p = \beta_m - \beta_{\textrm{BG}} - \beta_w (T,S) + \beta_w(T_{\textrm{BG}},0).$$ Here, the term $\beta _m - \beta _{\textrm {BG}}$ is the output of the default data processing. The pure water scattering is calculated as described in [33] and [34]. The temperature and salinity from field work are interpolated from CTD measurements. 3.1 Laboratory measurements 3.1.1 Validity ranges for LISST measurements The validity range of the LISST instruments is limited by the range of the detectors, as well as the assumption that all scattered light is only scattered once (single-scattering condition). The instrumental validity ranges are also evident from the bead calibration measurements. To get stable and consistent measurements for calibration use, the VSF must vary slowly with angle; due to possible smearing effects, the oscillations characteristic of beads must be absent or smoothed out. Small beads (for instance 0.190 $\mu$m diameter) fit this requirement well, but the LISST instruments are optimized for natural waters, which has implications for the lower signal limit of the ring detectors. In Fig. 3(a), it can be seen that for cscat values $<10^{-4}$, LISST-200X VSF measurements have only a weak relationship with theoretical VSF values, in contrast to values above this threshold. For reference, cscat values from field measurements are typically in the range $10^{-5}$ to $10^{-2}$. For the LISST-VSF, the same is seen for cscat values $<10^{2}$ in Fig. 3(b) (LISST-VSF cscat values are typically between $10^{2}$ and $10^{5}$ in field measurements). This pattern is seen for all ring detectors at similar cscat values. Given that the scattering data are related to the VSF through Eq.
(7), the minimum VSF values will vary with angle. Field scattering data within these orders of magnitude should be treated with care. Within the low particle concentration limit, the transmission reaches the upper detection limit (which is given as 0.98 for the LISST-VSF and 0.995 for the LISST-200X). The transmission errors have a relatively low impact on scattering data in clear waters, as seen from Eq. (6). Within the high particle concentration limit, the transmission lower limit has a large impact. Following Eq. (6), the scattering data are highly sensitive to errors in the transmission measurements when the transmission is low. However, multiple scattering is a more likely limiting factor for VSF measurements in turbid waters. Re-scattered light enters the detectors in addition to the single-scattered light, leading to an overestimation of the VSF. The single-scattering condition is commonly given as $\tau^* \ll 1$, where $\tau ^* = cL(1-g)$ is the scaled optical depth ($L$ is the pathlength, $c$ is the attenuation), not to be confused with the transmission ratio $\tau$. The appearance of the asymmetry parameter $g$ shows that for waters with smaller angular differences in the VSF, the single-scattering condition will be violated at lower concentrations than in waters with a dominant forward scattering component. When planning the measurements, we used $\tau ^* \leq 0.1$ as a default condition. Fig. 3. The VSF of polystyrene or PMMA beads, measured with the innermost ring detector on both instruments and compared with the theoretical values computed using Mie theory. Under a certain threshold in the scattering data (cscat), there is a loss of linearity between measured and predicted values. Because the detector area increases with angle, this problem is most prevalent for the innermost rings and diminishes at larger angles. The expected range of valid bead concentrations, due to some of the aforementioned factors, is visualized in Fig. 4.
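A minimal sketch of the concentration-bound reasoning behind Fig. 4, under the assumption that both attenuation and ring signal scale linearly with volume concentration in the single-scattering regime; the coefficients and thresholds below are illustrative placeholders, not instrument constants:

```python
# Upper bound from the single-scattering condition tau* = c*L*(1-g) <= 0.1,
# lower bound from a minimum detectable ring signal (cscat threshold).
# c_per_vc: attenuation per unit volume concentration [m^-1 / (uL/L)]
# cscat_per_vc: ring signal per unit volume concentration (both made up).
def concentration_bounds(c_per_vc, cscat_per_vc, L, g,
                         tau_star_max=0.1, cscat_min=1e-4):
    vc_max = tau_star_max / (c_per_vc * L * (1.0 - g))
    vc_min = cscat_min / cscat_per_vc
    return vc_min, vc_max

# Example: forward-peaked beads (g = 0.93) over a 15 cm pathlength.
vc_min, vc_max = concentration_bounds(c_per_vc=3.0, cscat_per_vc=0.5,
                                      L=0.15, g=0.93)
```

As in Fig. 4, a higher asymmetry parameter $g$ (more forward-peaked scattering) raises the upper bound, while a weaker per-concentration ring signal raises the lower bound.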
Here, Mie calculations have been applied to polystyrene beads of diameters 0.1-200 $\mu$m, and for the wavelengths of both LISST instruments. There is a scattering maximum for particle diameters approximately twice the wavelength, leading to a minimum of the detectable volume concentration. For smaller particles in the Rayleigh limit, the range of valid volume concentrations becomes smaller. Large particles can have much higher volume concentrations without violating the single-scattering condition, and can be used at lower concentrations while still giving strong ring-detector signals. However, large beads are limited by oscillations at larger angles. Measurements at these angles were excluded in the calibrations. Thus, for calibrating LISST instruments at all angles, it is necessary to include measurements with both smaller and larger beads. Fig. 4. Expected concentration range for valid measurements calculated for mono-dispersed polystyrene beads of varying diameter. The red area shows the valid region for the LISST-200X (wavelength 670 nm) and the green area for the LISST-VSF (wavelength 515 nm). The strongly colored areas indicate the concentration range needed to get valid measurements for the innermost rings, while the weakly colored areas indicate concentrations for a valid signal from the outermost rings. The areas overlap, except in a small region with low concentrations of large beads. Here, scattering is so forward-peaked that there is a valid signal for the inner ring but not the outer ring. It should be noted that these results do not extend to particle size distributions. In this study, we have used six bead diameters covering different angular domains. When calibrating the LISST-VSF, 0.190 $\mu$m beads were used for angles above $15^{\circ }$, and 0.508 $\mu$m beads above $5.5^{\circ }$. Furthermore, 2.504 $\mu$m beads were used for angles below $4.7^{\circ }$ and 4.92 $\mu$m beads below $2.1^{\circ }$ (both with limited signal under $\sim 0.3^{\circ }$).
Finally, 25.1 $\mu$m beads were used in the angular domains $0.09^{\circ }-0.75^{\circ }$ and $4.7^{\circ }-14.4^{\circ }$. For the LISST-200X, 2.504 $\mu$m beads were used for angles below $10^{\circ }$ (with limited signal under $0.1^{\circ }$), and 25.1 $\mu$m beads were used in the angular domains $0.07^{\circ }-1^{\circ }$ and $3.5^{\circ }-13^{\circ }$. In addition, 99 $\mu$m beads were of particular importance to get data for the innermost rings, covering the angular domains $0.04^{\circ }-0.2^{\circ }$ and $2.5^{\circ }-13^{\circ }$. 3.1.2 Bead attenuation measurements The results of the attenuation measurements are shown in Fig. 5(a) and 5(b). The measurements show overall high agreement with the theoretical values when the transmission is lower than 98 $\%$. For lower attenuation values (transmission above 98 $\%$), the measurements become imprecise. Some measurements with the LISST-200X have high variability also at higher attenuation. The 99 $\mu$m beads used were difficult to mix sufficiently to avoid settling. Other deviations may be explained by uncertainties in the volume concentration. Comparing the two instruments, the results support the notion that the LISST-VSF is suited for attenuation measurements in all but extremely clear natural waters ($c > 0.13$ m$^{-1}$), while the LISST-200X is more limited ($c > 0.8$ m$^{-1}$). The upper limit of neither instrument has been reached; the LISST-VSF results show good accuracy up to $\sim 30$ m$^{-1}$, above the specified limit. Fig. 5. Attenuation measurements with the LISST-200X and LISST-VSF using plastic beads; comparison between theoretical values from Mie theory and measured values. Error bars indicate the standard deviations in each measurement, consisting typically of around 100 samples. The factory-specified limits for valid transmittance measurements $\tau = \exp {(-cL)}$ are also plotted. 3.1.3 LISST-VSF eyeball detector calibration The PMT correction of the LISST-VSF was made with 0.190, 0.508 and 25.1 $\mu$m polystyrene beads over a large concentration range.
Similar to the Hu et al. study [29], a strong linear relationship between $P_{11}^{\textrm {uncal}}(\theta ,\textrm {645 mV})$ and $\beta ^{\textrm {eyeball}}(\theta )$ can be seen in Fig. 6(a) and 6(b). However, the $\kappa$-value is three orders of magnitude smaller for our instrument (SN = 1667), likely because the aforementioned study normalized $P_{11}^{\textrm {uncal}}$ by dividing by the incident laser power (which could mitigate effects of laser drift). For low 25.1 $\mu$m bead concentrations, which give the lowest VSF values in Fig. 6(a), there is significant variation in the data, possibly due to a relatively low PMT gain compared to the signal. Lack of rescaling (see Eq. (5)), due to attenuation values outside the instrument range, may cause additional uncertainties. It should also be noted that comparing goodness-of-fit across the entire angular domain with statistical quantities ($r^2$ or mean square error) should be treated with care, due to large variations in the dynamic range of the measurements, but manual inspection of the data confirms good agreement. Fig. 6. The absolute correction for the eyeball detector is found by linear regression of $P_{11}^{\textrm {uncal}}(\theta ,\textrm {645 mV})$-values and theoretical VSF-values for each measured angle. In (a), uncalibrated LISST-VSF eyeball detector data $P_{11}^{\textrm {uncal}}(\theta ,\textrm {645 mV})$ (converted to PMT = 645 mV, see Eq. (8)) are compared with corresponding theoretical VSF values for $\theta = 60^{\circ }$. Different colors differentiate the PMT values. The linear regression yielding the $\kappa$-value is also plotted. In (b), the absolute calibration factor $\kappa$ is plotted over the entire angular domain of the LISST-VSF eyeball detector output (blue line). The coefficient of determination ($r^2$-value) is also shown for each angle.
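The voltage conversion and per-angle regression of Eq. (8) can be sketched as follows, using synthetic data with a known $\kappa$ so the recovery can be checked; all numbers besides $\gamma = 8.6$ and $V_0 = 645$ mV are illustrative:

```python
import numpy as np

GAMMA = 8.6   # dynode-dependent exponent, value from Hu et al. [29]
V0 = 645.0    # reference PMT voltage [mV]

def to_reference_voltage(p11_uncal, V):
    """Convert an uncalibrated eyeball signal measured at PMT voltage V
    to the equivalent signal at the reference voltage V0, Eq. (8)."""
    return (V0 / V) ** GAMMA * p11_uncal

def fit_kappa(p11_ref, beta_true):
    """Zero-intercept least-squares slope: beta_true ~ kappa * p11_ref."""
    x = np.asarray(p11_ref, float)
    y = np.asarray(beta_true, float)
    return float(np.dot(x, y) / np.dot(x, x))

# Synthetic check: signals generated with a known kappa are recovered
# irrespective of the PMT voltage at which each sample was "measured".
rng = np.random.default_rng(0)
kappa_true = 2.5e-6
beta = rng.uniform(1e-4, 1e-2, size=50)        # theoretical VSF values
V = rng.uniform(500.0, 800.0, size=50)         # varying PMT voltages [mV]
p11 = beta / kappa_true * (V / V0) ** GAMMA    # uncalibrated signals
kappa_est = fit_kappa(to_reference_voltage(p11, V), beta)
```

In the actual calibration this regression is repeated for each eyeball angle, yielding the $\kappa(\theta, V_0)$ curve of Fig. 6(b).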
3.1.4 LISST-VSF ring detector validation Measurements of plastic beads up to 25.1 $\mu$m were used to calculate correction functions for the LISST-VSF ring detector. The results are shown in Fig. 7. Near-unity correction functions and $r^2$-values for all methods would indicate a perfect fit between theory and measurements. Figure 7(a) and 7(b) reveal that the ring detector measurements generally agree well with the expected theoretical values for all three methods used. The linearity of the data is illustrated in Fig. 9(a). The correction function found using the median of the ratio (method 1) deviates from the linear regression functions (methods 2 and 3) at some angles. The reason seems to be that it is more influenced by where in the scattering range the majority of the measurements have been made. There are no significant differences between the perpendicular (Fig. 7) and parallel (not shown) polarized incident light, as expected for forward scattering. Measurements from the ring detector at $0.90^{\circ }$ were highly erratic and non-physical, including in blank measurements. Results from this ring have thus been consistently treated as invalid and replaced by interpolated values from the two neighboring rings, even though this may introduce a small bias. Fig. 7. Correction functions for the LISST-VSF ring detector, for the perpendicular incident beam (first rotation, in the vertical plane of the instrument), are shown in (a). Each method is described in section 2.1.4. The coefficient of determination ($r^2$-value) for each method is shown in (b). Significantly higher forward scattering than expected from Mie theory was measured for sub-micron beads. The forward scattering varied between measurement series and was observed to increase with time.
We speculate that this is due to flocculation of small beads, as it has been shown that particle flocculations may appear as larger particles in LISST particle size distribution measurements [35], which is consistent with higher forward scattering. Other studies have used an ultrasonic device to break up the flocs [11,23], which would likely eliminate this error source. Moreover, smaller beads scatter so little in the forward direction that the ring detector scattering data fall under the instrument detection level. Data affected by these error sources were discarded. Multiple scattering influences the measurements, which is seen as a non-linear relationship between bead concentration and VSF. This is plotted for 25.1 $\mu$m beads in Fig. 10(a). The single-scattering condition has been used as a guideline ($\tau^* < 0.1$), but the results shown in Fig. 10 indicate that it may not be an adequate condition for calibration purposes, especially for larger beads such as 25 $\mu$m at large angles. Thus, some empirical considerations had to be made for the calibration concentration range. Slade and Boss [11] point out the imaginary refractive index of the bead material as a major error source for VSF measurements of larger beads, but based on Fig. 10 we believe multiple scattering plays a more significant role than expected. Variations between different particle samples and their dilutions seem to be the largest source of uncertainty. The impact has been mitigated by doing multiple independent measurement series with varying particle diameter and applying the attenuation re-scaling. Nevertheless, it is reasonable to conclude that the deviations may be attributed to experimental uncertainties, and that the LISST-VSF ring detector is adequately calibrated from the factory. 3.1.5 LISST-200X ring detector calibration While the VSF is a default output of the factory data processing for the LISST-VSF, the LISST-200X data processing does not yield the VSF by default.
However, Sequoia Sci. provided a data processing script enabling non-calibrated VSF measurements as output. Here, the correction functions were used directly for absolute calibration of the measurements. Results shown in Fig. 8 reveal similarities with the LISST-VSF ring detector, including most of the error sources. Figure 9(b) shows an example of a robust fit of the data at $0.66^{\circ }$. The correction factors vary between $1.6 \times 10^{11}$ and $3.6 \times 10^{11}$. Method 1 deviates slightly from the two linear regression methods (methods 2 and 3), but follows the same general trend. In Fig. 10(b), one may observe a similar non-linear relationship between attenuation and measured scattering at large angles for 25 $\mu$m beads with the LISST-200X. In addition, there were some saturation errors in the ring detector data. These measurements had to be manually removed. Following the same considerations as for the LISST-VSF, a constant value, $A = 2.8 \times 10^{11}$, was chosen for all rings. Fig. 8. Correction functions for the LISST-200X ring detector are shown in (a). Each method is described in section 2.1.4. The coefficient of determination ($r^2$-value) for each method is shown in (b). Fig. 9. Measured VSF from ring 16 of the LISST-VSF ring detector (total ring number is 32), compared with theoretically predicted VSF values in (a). The standard deviations of the measurements are plotted as error bars. A robust linear fit (method 2) is also plotted as a black line. In (b), uncalibrated VSF values from the LISST-200X are plotted with corresponding theoretical VSF values, along with the robust fit (method 2, showing ring number 18 out of 36). Fig. 10. VSF measurements with 25.1 $\mu$m beads are plotted as a function of the attenuation, for the outermost ring of the LISST-VSF (a) and LISST-200X (b). Theoretically predicted scattering is plotted as a black line.
Maximum scaled optical depth for the LISST-VSF results is $cL(1-g) \sim 0.02$; for the LISST-200X the maximum value is $cL(1-g) \sim 0.05$. Non-linear behaviour can be seen for a large range of concentrations. In the innermost rings the non-linear behaviour is absent (not plotted). 3.2 Field measurements 3.2.1 Assessment of PMT calibration In situ field measurements with particulate scattering covering several orders of magnitude enable robust comparisons between the absolute and relative PMT calibration, as well as between the two LISST instruments. A natural point of comparison for the relative and absolute calibration is the VSF at $15^{\circ }$, the start of the eyeball measurement, plotted in Fig. 11. The two methods agree well for mid-range scattering, while discrepancies are apparent in very clear and turbid waters. For clear waters, systematic discrepancies may be seen for PMT values 435-550. These measurements are from the INTAROS-2018 cruise, where the automatic PMT gain adjustment seemed not to adjust sufficiently to very clear waters, yielding noisy eyeball data. After the cruise, the PMT gain algorithm was updated by the manufacturer, yielding significantly better results for later fieldwork. The absolute calibration was performed after this update. While the PMT gain may be set manually, the automatic gain is typically necessary due to water column variations. Moreover, for the CAATEX-2019 cruise, unreliable values in the outermost ring yielded artificially low eyeball values for the relative calibration, illustrating the uncertainty of this method. The relative calibration is also visibly affected by random errors in the forward scattering, in addition to the systematic errors in clear waters. In turbid waters, a discontinuity is visible in the VSF between the ring and eyeball detector at 15$^{\circ }$ when the absolute calibration is used (see Fig. 14).
This is due to multiple scattering effects, as the two detectors have different pathlengths at 15$^{\circ }$. The eyeball detector may also experience saturation in particularly turbid media. Fig. 11. Comparison of LISST-VSF measurements at $15^{\circ }$, using relative (default calibration) and absolute calibration. Each color represents a different PMT value used in the calibration. As the PMT values may change throughout a profile, each measurement is plotted. While most measurements are close to the 1:1 line, there are differences in turbid and very clear waters. The scattering coefficient, backscattering coefficient and backscattering fraction for the two calibrations are compared in Fig. 12(a) and 12(b). All are integrated from the bin-median VSF, with extrapolation in the backward direction using a well-established backscattering model [9]. For $b$, the differences are minimal, due to the dominating contribution of forward scattering to the scattering coefficient. By contrast, $b_b$ shows more discrepancies; in particular, the effects of multiple scattering are apparent. Fig. 12. Comparison of the particulate scattering coefficient $b$ (a) and the particulate backscattering coefficient $b_b$ (b), when using the relative and absolute calibration. 3.2.2 Temperature and salinity corrections In Fig. 13, absolute calibrated LISST-VSF measurements are plotted with the offset due to pure water scattering, computed from temperature and salinity interpolated from CTD measurements. It is clear that a temperature and salinity correction is important for clear waters, but it gives a negligible contribution at small angles or in turbid waters. The LISST-200X measurement domain is such that a temperature and salinity correction of the VSF is not needed. The importance of auxiliary CTD casts is evident in almost all investigated waters, as changes in both optical and physical quantities are often significant in the upper water column.
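Once a pure-water scattering model is available, the correction of Eq. (13) reduces to simple arithmetic. A sketch, with the pure-water model passed in as a function; the toy model below is a made-up placeholder, not a real seawater parameterization such as those of [33,34]:

```python
# Particulate-scattering retrieval of Eq. (13), evaluated per angle.
# beta_default = beta_m - beta_BG is the output of the default processing.
def particulate_vsf(beta_default, beta_w, T_insitu, S_insitu, T_blank):
    """Return beta_p = beta_default - beta_w(T, S) + beta_w(T_BG, 0)."""
    return beta_default - beta_w(T_insitu, S_insitu) + beta_w(T_blank, 0.0)

# Toy pure-water model: scattering grows weakly with T and S.
# The coefficients are invented for illustration only.
def toy_beta_w(T, S):
    return 1e-4 * (1.0 + 0.002 * T + 0.008 * S)

# Cold, saline in situ water corrected against a warm freshwater blank.
beta_p = particulate_vsf(beta_default=5e-4, beta_w=toy_beta_w,
                         T_insitu=2.0, S_insitu=35.0, T_blank=20.0)
```

Because the optical-loss term $\beta_L$ cancels between the field and blank measurements, it never needs to be known explicitly; only the two pure-water evaluations differ.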
Natural waters with high salinity and low temperatures have the highest pure water scattering, with the salinity making the strongest contribution. Most physical and empirical models used in the computation of pure water scattering have a validity range down to 0 $^{\circ }$C. In polar surface waters, the water temperature can be lower than -1.5 $^{\circ }$C. Few studies have investigated optical properties at such temperatures. A theoretical model for the volume scattering function of pure seawater was recently extended to subzero temperatures [36]. For in situ measurements, one also needs to consider possible offsets in light attenuation and the refractive index [37]. In particular, changes in the latter lead to a different transmittance at the interface between water and optical windows. Estimates show that these effects combined can give an absolute error in the attenuation up to $\sim$0.01 m$^{-1}$ for the LISST-VSF, and $\sim$0.05 m$^{-1}$ for the LISST-200X. VSF measurements are less affected, with relative errors on the order of $\sim 10^{-3}$. Controlled validation measurements at subzero temperatures and high salinity are needed for more accurate error estimates, instrument-specific corrections, and validation of the theoretical scattering model. Fig. 13. LISST-VSF measurements from a large selection of field measurements (median between 20 and 50 meters) are plotted in green, before correction for temperature and salinity. The temperature and salinity correction offset for each VSF measurement, computed from theoretical pure water scattering [33], is plotted as black dashed lines. Thus, the lines form a band of typical VSF offsets in natural sea water. 3.2.3 Volume scattering function measurements A selection of particulate VSF measurements in different natural waters is shown in Fig. 14. The values are calculated from the median of all valid data within a given depth interval (with low variability) at each measurement site.
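The depth-interval median used for these values, pooling samples from multiple casts, can be sketched as follows; the bin width and data are illustrative:

```python
import numpy as np

# Pool samples from all casts, assign them to fixed-width depth bins,
# and report the median value per bin. The median suppresses outliers
# (e.g. single noisy samples) better than the mean.
def bin_median(depths, values, bin_width=1.0):
    depths = np.asarray(depths, float)
    values = np.asarray(values, float)
    edges = np.arange(0.0, depths.max() + bin_width, bin_width)
    centers, medians = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (depths >= lo) & (depths < hi)
        if mask.any():
            centers.append(0.5 * (lo + hi))
            medians.append(float(np.median(values[mask])))
    return np.array(centers), np.array(medians)

# Two "casts" pooled together; the outlier (50.0) in the first bin is
# suppressed by the median.
d = [0.2, 0.7, 0.4, 1.3, 1.6, 1.8]
v = [1.0, 1.2, 50.0, 2.0, 2.2, 2.1]
centers, med = bin_median(d, v, bin_width=1.0)
```

In practice this is applied per angle, so each depth bin yields a full median VSF curve.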
The clearest waters measured in the central Arctic had minimum VSF values of $\sim 10^{-5}$, with considerable noise even after averaging 138 samples. Only slightly higher scattering, seen at the North Pole station, gives a much less noisy signal. At the upper turbidity limit is glacial meltwater. Here, the measured VSF around $90^{\circ }$ is more than five orders of magnitude larger than in the clearest measurements. However, the aforementioned discontinuity between the detectors can be seen at 15$^{\circ }$, revealing multiple scattering effects and suggesting that the detected VSF magnitude may be incorrect. Variations in the phase function are also evident. Fig. 14. A selection of in situ particulate VSF measurements with the LISST-VSF in highly varying natural waters, from the Norwegian coast to the North Pole. The extremities of the instrument validity range can be observed. In some of the LISST-VSF measurements, a dip can be seen around 120$^{\circ }$ in the uppermost 10-20 meters. This is most likely because the field-of-view of the eyeball detector moves from being directed at the open environment to being directed at the instrument wall at $\sim$120$^{\circ }$, leading to an artifact in the ambient light rejection. For angles above 145-150$^{\circ }$, elevated VSF values can also be frequently observed (see Fig. 13), both for laboratory and field measurements. The cause is probably instrumental reflections, but backscattering from bubbles could also have an additional contribution. In Fig. 15, VSF measurements of the LISST-VSF and LISST-200X are compared for three cases. In the lowermost VSF (dashed line), large parts of the LISST-200X measurement fall under the noise level indicated with a solid black line (chosen as cscat = $5 \times 10^{-5}$). The LISST-200X frequently measures scattering under the lower detection level in clear waters, resulting in an unreliable and limited VSF. For the middle case (dashed-dotted line), both instruments perform well.
Here, the shape of the VSF agrees well, and there is a reasonable increase in scattering from 670 to 515 nm. However, for turbid waters (solid line), the LISST-VSF measures VSF values $\sim$25-500 times higher than corresponding LISST-200X values. These severely elevated measurements are due to multiple scattering. Moreover, the flattening of the LISST-VSF phase function close to zero has been shown to be due to saturated ring detectors. The LISST-200X, with its 2.5 cm pathlength, is much less influenced by these effects. Fig. 15. The VSF measured with the LISST-200X and the LISST-VSF are compared for three different cases. Estimated minimum VSF values that can be measured by the LISST-200X are indicated as a solid red line. Strong systematic errors due to multiple scattering are apparent for the LISST-VSF in turbid glacial meltwater. 3.2.4 Estimating scattering coefficient from forward scattering The dominance of forward scattering in the scattering coefficient ($\beta (\theta <13^{\circ })$ contains on average $\sim$80$\%$ of $b$) indicates the possibility that LISST-200X measurements can be used to estimate $b(670 \textrm {nm})$. LISST-VSF measurements can be used for a robust evaluation, by computing $b$ from both the entire VSF measurement and only the VSF of the LISST-200X angular domain (0.08-13$^{\circ }$). The former is computed by using the LISST-VSF measurement up to 145$^{\circ }$ and the backscattering extrapolation (described in Zhang et al. [9]) up to 180$^{\circ }$, which can be assumed to be close to the correct $b$. The latter is computed by curve-fitting the Fournier-Forand VSF to the ring data (up to 13.2$^{\circ }$), which is also used for the LISST-200X scattering. An example is plotted in Fig. 16. The forward scattering extrapolation tends to systematically overestimate backscattering. Fig. 16.
Plot of LISST-VSF and LISST-200X measurements done at the same location and depth (Emiliania huxleyi bloom, 0.5-10 m), together with extrapolations relevant for estimations of $b$. In Fig. 17, the two calculated scattering coefficients are compared for a large selection of VSF measurements. Overall, the extrapolated scattering coefficient $b_{\textrm {FF}}$ agrees well with the assumed correct value $b_{\textrm {corr}}$. A tendency to overestimate the scattering for $b > 1$ $\textrm {m}^{-1}$ can be seen, but the overestimation of the backscattering has a relatively small impact. Linear regression for $b < 5$ m$^{-1}$ gives the correlation shown in Fig. 17, with a 95$\%$ confidence interval (assuming a Gaussian distribution) of $\pm 20\%$ on the estimate. Limiting the regression to $b < 1$ $\textrm {m}^{-1}$ decreases the confidence interval to $\pm 6\%$. Fig. 17. The scattering coefficient estimated from LISST-VSF forward scattering ($0.09^{\circ }$-$13.2^{\circ }$) compared with the scattering coefficient calculated from the full LISST-VSF measurement ($0.09^{\circ }$-$145^{\circ }$). Scattering coefficients estimated from LISST-200X measurements (excluding VSF data under the detection limit) are compared with LISST-VSF scattering coefficients measured in the same waters in Fig. 18. Large variations are evident, but the expected trend of generally higher scattering at 515 than 670 nm can be seen. It is also clear that the LISST-200X $b$-estimates are much less affected by multiple scattering than the LISST-VSF values. The related deviations seem to occur above $\sim$2 m$^{-1}$. Fig. 18. Comparison of scattering coefficients, measured with LISST-VSF and estimated from LISST-200X forward scattering. Large variations can be seen, and LISST-VSF multiple scattering errors seem to occur above $\sim$2 m$^{-1}$.
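The scattering coefficient follows from the VSF as $b = 2\pi\int_0^{\pi}\beta(\theta)\sin\theta\,d\theta$. A minimal numerical sketch (simple trapezoidal integration, checked against the isotropic case where $b = 4\pi\beta_0$ analytically); fitting a Fournier-Forand shape to the ring data, as done in the study, is not reproduced here:

```python
import numpy as np

def scattering_coefficient(theta_deg, beta):
    """Integrate b = 2*pi * int beta(theta) sin(theta) dtheta over the
    supplied angular grid, using the trapezoidal rule."""
    theta = np.radians(np.asarray(theta_deg, float))
    y = np.asarray(beta, float) * np.sin(theta)
    # trapezoidal rule: sum of 0.5*(y_i + y_{i+1}) * dtheta_i
    return float(2.0 * np.pi * np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(theta)))

# Self-check with an isotropic VSF: b should equal 4*pi*beta0.
theta_deg = np.linspace(0.0, 180.0, 2001)
beta0 = 0.01
b_iso = scattering_coefficient(theta_deg, np.full_like(theta_deg, beta0))
```

For a forward-only estimate, the same integral would be evaluated over the measured ring-detector domain with an extrapolated phase function filling in the remaining angles.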
3.2.5 Schlieren effect The Schlieren effect is a scattering phenomenon caused by microturbulence and refractive index variations, and a prevalent error source for scattering and transmission measurements in stratified natural waters. As Schlieren causes elevated forward scattering, it primarily affects the transmissometer and the innermost rings (similar to large particles, leading to errors in PSD calculations). Concurrent profiles with both LISST instruments and CTD instruments make it possible to investigate this effect. This is shown in Fig. 19, where transmission and scattering on the innermost ring of both LISST instruments are plotted against the buoyancy frequency. The buoyancy frequency is a widely used measure of stratification in oceanography, and has in previous studies been linked to Schlieren effects [16–18]. Figure 19(a) shows a clear decrease in LISST-VSF transmission measurements for buoyancy frequencies increasing from 0.05 s$^{-1}$. For buoyancy frequencies at 0.15 s$^{-1}$ and higher, many transmission measurements are close to being completely extinguished. However, perhaps the most striking feature is the absence of data points with both high transmission and high buoyancy frequency; there seems to be an upper (lower) limit on the transmission (attenuation), linearly dependent on the buoyancy frequency. A trend is less clear for the LISST-200X (Fig. 19(b)), due to the shorter pathlength, but above a buoyancy frequency of $\sim$0.15 s$^{-1}$, large fluctuations in the transmission are prevalent. Increases in the measured forward scattering can also be seen for both instruments (Fig. 19(c) and 19(d)). Ring saturation is also apparent in the LISST-VSF plot. However, it should be emphasized that the scattering measurements are coupled to the transmission through Eq. (6). Thus, suppressed transmission is likely a larger error source for VSF measurements in stratified waters than elevated forward scattering.
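For reference, the buoyancy (Brunt-Väisälä) frequency used as the stratification measure in Fig. 19 can be computed from a CTD density profile as $N = \sqrt{(g/\rho_0)\,d\rho/dz}$ with depth $z$ positive downward; the profile values below are illustrative:

```python
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def buoyancy_frequency(depth, rho, rho0=1025.0):
    """Buoyancy frequency N [s^-1] from a density profile rho(depth),
    with depth in m (positive downward) and density in kg/m^3.
    Unstable segments (negative N^2) are clipped to zero."""
    drho_dz = np.gradient(np.asarray(rho, float), np.asarray(depth, float))
    n2 = (G / rho0) * drho_dz
    return np.sqrt(np.clip(n2, 0.0, None))

# Illustrative profile with a weak pycnocline around 10 m depth.
depth = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
rho = np.array([1023.0, 1023.5, 1025.0, 1026.0, 1026.2])
N = buoyancy_frequency(depth, rho)
```

The resulting profile of N can then be matched against concurrent transmission and inner-ring scattering data, as in Fig. 19.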
A final consideration regarding these measurements is that the water density gradient (pycnocline) is associated with particle accumulation and flocculation. Hence, enhanced particulate scattering can also be expected there, and separating the two phenomena is a considerable challenge.

Fig. 19. Optical measurements plotted as a function of the buoyancy frequency, visualizing the Schlieren effect on LISST instruments: the transmission of the LISST-VSF (a), the transmission of the LISST-200X (b), the scattering on the innermost LISST-VSF ring (c), and the scattering on the innermost LISST-200X ring (d).

VSF measurements using the LISST-VSF and LISST-200X have been found to be valid over several orders of magnitude, making them valuable for further in situ and laboratory research. Bench-top measurements using monodispersed beads enable absolute calibration of the instrument detectors, but several considerations must be made with regard to instrument noise level, VSF oscillations, possible bead flocculation, and multiple scattering. We largely repeat procedures performed in earlier studies [11,24,29], but extend the eyeball calibration to a larger range. While the factory calibration of the LISST-VSF ring detector was shown to be satisfactory, the absolute calibration of the eyeball detector has greatly improved the robustness of the VSF measurements, avoiding significant propagation of uncertainties from the two outermost ring detectors to the entire eyeball detector domain. Having two independent detectors with different pathlengths also reveals multiple scattering effects in turbid waters. However, using the absolute calibration requires that the PMT gain adjusts itself adequately to the particulate scattering. The lower thresholds of the LISST ring detectors have been given as angle-dependent values (cscat $\geq 10^{-4}$ for LISST-200X, cscat $\geq 10^{2}$ for LISST-VSF), but note that these are order-of-magnitude numbers.
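The transmission correction of Eq. (6), cscat = scat/τ − zscat, followed by screening against a lower detection threshold, can be sketched as follows (the counts, transmission, and threshold values below are hypothetical illustrations, not instrument specifications):

```python
def corrected_ring_scattering(scat, tau, zscat, threshold):
    """Apply the transmission correction of Eq. (6), cscat = scat/tau - zscat,
    then mask values below an (order-of-magnitude) detection threshold.

    scat      : raw ring signals
    tau       : beam transmission (0 < tau <= 1)
    zscat     : background (pure-water) ring signals
    threshold : lower detection limit; sub-threshold values become None
    """
    out = []
    for s, z in zip(scat, zscat):
        c = s / tau - z
        out.append(c if c >= threshold else None)
    return out

# Hypothetical signals: low transmission (strong attenuation) inflates cscat,
# illustrating why suppressed transmission propagates into the VSF.
raw = [250.0, 40.0, 12.0]
background = [30.0, 25.0, 11.0]
print(corrected_ring_scattering(raw, tau=0.5, zscat=background, threshold=1e-4))
```

Dividing by τ is also why a Schlieren-suppressed transmission, as in Fig. 19, contaminates the corrected scattering at every angle.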
The lower limit of the LISST-VSF eyeball detector depends on the PMT gain and, under some circumstances, on the ambient light conditions.

The LISST-VSF and LISST-200X have been extensively used in field campaigns, giving valuable knowledge on how to acquire high-quality data. For the LISST-VSF, the water within the sample volume (beam area) must not be static while sampling, either in bench-top mode or during field deployment. The consequence of static water is large fluctuations in transmission and forward scattering. Our speculation is that the laser heats up the water enough to cause microturbulence effects. Hence, continuous descent and ascent is considered best practice during field deployment. For logistical reasons, a profiling speed of approximately 0.5 m/s has primarily been used, but a test with speeds down to 0.1-0.2 m/s has also produced good results. Due to the slow sample rate, we recommend using the lowest practical profiling velocity with the LISST-VSF. Another issue is collecting enough measurements for robust results, which is solved by performing multiple casts and calculating the median VSF, binned with respect to depth. For the LISST-200X, the deployment method is more flexible, but a similar continuous profiling protocol has been used. It has also been shown that temperature and salinity corrections are necessary for LISST-VSF measurements in very clear waters, but are not relevant for LISST-200X measurements.

Comparing the two instruments, their configurations make them optimized for scattering measurements in different types of natural waters. The LISST-VSF, with its long pathlength, higher laser power, and low sample rate, suits clear waters with low scattering, but also coastal waters. In turbid waters with scattering coefficients above approximately 2 m$^{-1}$, multiple scattering errors become significant, but further investigation is needed for details about the effects.
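The multi-cast strategy described above, pooling casts and taking the median per depth bin, can be sketched as follows (depths and values are hypothetical):

```python
def depth_binned_median(samples, bin_size=1.0):
    """Median measurement per depth bin, pooled over multiple casts.

    samples  : iterable of (depth_m, value) pairs from all casts combined
    bin_size : depth bin width in metres
    Returns {bin_index: median_value}; the median suppresses outliers such
    as occasional Schlieren- or bubble-contaminated samples.
    """
    bins = {}
    for depth, value in samples:
        bins.setdefault(int(depth // bin_size), []).append(value)

    def median(vals):
        vals = sorted(vals)
        n = len(vals)
        mid = n // 2
        return vals[mid] if n % 2 else 0.5 * (vals[mid - 1] + vals[mid])

    return {k: median(v) for k, v in sorted(bins.items())}

# Three hypothetical casts sampling the same depth range.
pooled = [(0.4, 1.0), (0.6, 1.2), (0.9, 9.0),   # one outlier near the surface
          (1.2, 2.0), (1.4, 2.2), (1.8, 2.1)]
print(depth_binned_median(pooled))  # {0: 1.2, 1: 2.1}
```

Note how the outlier (9.0) in the surface bin is discarded by the median, whereas a mean would have been pulled far off.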
The LISST-200X is more suited for such waters, with a short pathlength and a higher sample rate (which can detect more small-scale variations). However, its scattering and transmission detection levels make it less suitable for measurements in clear natural waters. The configuration of the innermost rings makes it possible to detect scattering from less than 0.1$^{\circ }$, but also makes the measurements vulnerable to errors due to the Schlieren effect. Schlieren has also been shown to significantly affect transmittance measurements, especially for buoyancy frequencies above 0.15 s$^{-1}$, which can have a severe effect on scattering measurements at all angles, as they are corrected by dividing by the transmittance (see Eq. (6)). Thus, care must be taken for measurements in stratified waters. Even though the LISST-200X only measures the VSF up to 13$^{\circ }$, it has been shown that by curve fitting the Fournier-Forand phase function to the data, a good estimate of the scattering coefficient at 670 nm can be found. Thus, combined with attenuation measurements, the absorption at 670 nm can be calculated from $a = c - b$. Existing in situ spectrophotometers often yield measurements with large scattering-related uncertainties at longer wavelengths [38], especially in turbid waters. Due to its 2.5 cm pathlength, the LISST-200X is less susceptible to multiple scattering errors, and may thus yield absorption measurements of higher accuracy than existing instrumentation in such waters. The instrument wavelength (670 nm) lies close to the chlorophyll-a pigment absorption peak at 676 nm. Thus, the instrument may be relevant for use with hyper-spectral in situ instrumentation for improved retrieval of primary production estimates in optically complex waters.

We thank the Norwegian Coast Guard, and especially the crew onboard KV Svalbard, for allocation of ship time and excellent field support during the cruise.
We also thank the INTAROS and CAATEX projects and NERSC for including us in the research cruises and for their support. CTD measurements from six stations on the INTAROS cruise were provided by Waldemar Walczowski, for which we are grateful. Moreover, Espegrend Marine Biology Lab has been very helpful with the fieldwork in Raunefjorden. Thomas Leeuw at Sequoia Scientific has also been of great help, providing additional code for the LISST instruments and answering questions regarding them. Finally, we thank three anonymous reviewers for valuable reviews, which helped us improve the manuscript. 1. D. Blondeau-Patissier, J. F. Gower, A. G. Dekker, S. R. Phinn, and V. E. Brando, "A review of ocean color remote sensing methods and statistical techniques for the detection, mapping and analysis of phytoplankton blooms in coastal and open oceans," Prog. Oceanogr. 123, 123–144 (2014). [CrossRef] 2. P. J. Werdell, L. I. McKinna, E. Boss, S. G. Ackleson, S. E. Craig, W. W. Gregg, Z. Lee, S. Maritorena, C. S. Roesler, C. S. Rousseaux, D. Stramski, J. M. Sullivan, M. S. Twardowski, M. Tzortziou, and X. Zhang, "An overview of approaches and challenges for retrieving marine inherent optical properties from ocean color remote sensing," Prog. Oceanogr. 160, 186–212 (2018). [CrossRef] 3. Y. Agrawal and H. Pottsmith, "Instruments for particle size and settling velocity observations in sediment transport," Mar. Geol. 168(1-4), 89–114 (2000). [CrossRef] 4. X. Zhang, M. Twardowski, and M. Lewis, "Retrieving composition and sizes of oceanic particle subpopulations from the volume scattering function," Appl. Opt. 50(9), 1240–1259 (2011). [CrossRef] 5. N. D. Stockley, R. Röttgers, D. McKee, I. Lefering, J. M. Sullivan, and M. S. Twardowski, "Assessing uncertainties in scattering correction algorithms for reflective tube absorption measurements made with a wet labs ac-9," Opt. Express 25(24), A1139–A1153 (2017). [CrossRef] 6. G.
Mie, "Beiträge zur optik trüber medien, speziell kolloidaler metallösungen," Ann. Phys. 330(3), 377–445 (1908). [CrossRef] 7. M. I. Mishchenko, L. D. Travis, and D. W. Mackowski, "T-matrix computations of light scattering by nonspherical particles: a review," J. Quant. Spectrosc. Radiat. Transfer 55(5), 535–575 (1996). [CrossRef] 8. G. R. Fournier and J. L. Forand, "Analytic phase function for ocean water," in Ocean Optics XII, vol. 2258 (International Society for Optics and Photonics, 1994), pp. 194–201. 9. X. Zhang, G. R. Fournier, and D. J. Gray, "Interpretation of scattering by oceanic particles around 120 degrees and its implication in ocean color studies," Opt. Express 25(4), A191–A199 (2017). [CrossRef] 10. Y. C. Agrawal, "The optical volume scattering function: Temporal and vertical variability in the water column off the new jersey coast," Limnol. Oceanogr. 50(6), 1787–1794 (2005). [CrossRef] 11. W. H. Slade and E. S. Boss, "Calibrated near-forward volume scattering function obtained from the lisst particle sizer," Opt. Express 14(8), 3602–3615 (2006). [CrossRef] 12. E. Boss, W. H. Slade, M. Behrenfeld, and G. Dall'Olmo, "Acceptance angle effects on the beam attenuation in the ocean," Opt. Express 17(3), 1535–1550 (2009). [CrossRef] 13. Y. Agrawal and O. A. Mikkelsen, "Empirical forward scattering phase functions from 0.08 to 16 deg. for randomly shaped terrigenous 1–21 μm sediment grains," Opt. Express 17(11), 8805–8814 (2009). [CrossRef] 14. L. Mullen, D. Alley, and B. Cochenour, "Investigation of the effect of scattering agent and scattering albedo on modulated light propagation in water," Appl. Opt. 50(10), 1396–1404 (2011). [CrossRef] 15. X. Zhang, D. J. Gray, Y. Huot, Y. You, and L. Bi, "Comparison of optically derived particle size distributions: scattering over the full angular range versus diffraction at near forward angles," Appl. Opt. 51(21), 5085–5099 (2012). [CrossRef] 16. R. 
Styles, "Laboratory evaluation of the lisst in a stratified fluid," Mar. Geol. 227(1-2), 151–162 (2006). [CrossRef] 17. O. A. Mikkelsen, T. G. Milligan, P. S. Hill, R. J. Chant, C. F. Jago, S. E. Jones, V. Krivtsov, and G. Mitchelson-Jacob, "The influence of schlieren on in situ optical measurements used for particle characterization," Limnol. Oceanogr.: Methods 6(3), 133–143 (2008). [CrossRef] 18. A. Karageorgis, D. Georgopoulos, W. Gardner, O. Mikkelsen, and D. Velaoras, "How schlieren affects beam transmissometers and lisst-deep: an example from the stratified danube river delta, nw black sea," Medit. Mar. Sci. 16(2), 366–372 (2015). [CrossRef] 19. J. E. Tyler, "Scattering properties of distilled and natural waters 1," Limnol. Oceanogr. 6(4), 451–456 (1961). [CrossRef] 20. T. J. Petzold, Volume scattering functions for selected ocean waters, Tech. rep., Scripps Institution of Oceanography La Jolla Ca Visibility Lab (1972). 21. E. Marken, N. Ssebiyonga, J. K. Lotsberg, J. J. Stamnes, B. Hamre, Ø. Frette, A. S. Kristoffersen, and S. R. Erga, "Measurement and modeling of volume scattering functions for phytoplankton from norwegian coastal waters," J. Mar. Res. 75(5), 579–603 (2017). [CrossRef] 22. W. H. Slade, Y. C. Agrawal, and O. A. Mikkelsen, "Comparison of measured and theoretical scattering and polarization properties of narrow size range irregular sediment particles," in 2013 OCEANS-San Diego, (IEEE, 2013), pp. 1–6. 23. T. Harmel, M. Hieronymi, W. Slade, R. Röttgers, F. Roullier, and M. Chami, "Laboratory experiments for inter-comparison of three volume scattering meters to measure angular scattering properties of hydrosols," Opt. Express 24(2), A234–A256 (2016). [CrossRef] 24. D. Koestner, D. Stramski, and R. A. Reynolds, "Measurements of the volume scattering function and the degree of linear polarization of light scattered by contrasting natural assemblages of marine particles," Appl. Sci. 8(12), 2690 (2018). [CrossRef] 25. D. Koestner, D. Stramski, and R. 
A. Reynolds, "Polarized light scattering measurements as a means to characterize particle size and composition of natural assemblages of marine particles: publisher's note," Appl. Opt. 59(29), 9233 (2020). [CrossRef] 26. B. Cochenour, S. P. O'Connor, and L. J. Mullen, "Suppression of forward-scattered light using high-frequency intensity modulation," Opt. Eng. 53(5), 051406 (2013). [CrossRef] 27. B. Cochenour, K. Dunn, A. Laux, and L. Mullen, "Experimental measurements of the magnitude and phase response of high-frequency modulated light underwater," Appl. Opt. 56(14), 4019–4024 (2017). [CrossRef] 28. R. Sahoo, P. Shanmugam, and S. K. Sahu, "Impact of air–sea interface effects and bubble and particulate scattering on underwater light field distribution: An implication to underwater wireless optical communication system," in Optical and Wireless Technologies, (Springer, 2020), pp. 171–178. 29. L. Hu, X. Zhang, Y. Xiong, and M.-X. He, "Calibration of the LISST-VSF to derive the volume scattering functions in clear waters," Opt. Express 27(16), A1188–A1206 (2019). [CrossRef] 30. X. Zhang, L. Hu, Y. Xiong, Y. Huot, and D. Gray, "Experimental estimates of optical backscattering associated with submicron particles in clear oceanic waters," Geophys. Res. Lett. 47(4), e2020GL087100 (2020). [CrossRef] 31. X. Ma, J. Q. Lu, R. S. Brock, K. M. Jacobs, P. Yang, and X.-H. Hu, "Determination of complex refractive index of polystyrene microspheres from 370 to 1610 nm," Phys. Med. Biol. 48(24), 4165–4172 (2003). [CrossRef] 32. S. N. Kasarova, N. G. Sultanova, C. D. Ivanov, and I. D. Nikolov, "Analysis of the dispersion of optical plastic materials," Opt. Mater. 29(11), 1481–1490 (2007). [CrossRef] 33. X. Zhang, L. Hu, and M.-X. He, "Scattering by pure seawater: effect of salinity," Opt. Express 17(7), 5698–5710 (2009). [CrossRef] 34. X. Zhang, D. Stramski, R. A. Reynolds, and E. R. 
Blocker, "Light scattering by pure water and seawater: the depolarization ratio and its variation with salinity," Appl. Opt. 58(4), 991–1004 (2019). [CrossRef] 35. W. H. Slade, E. Boss, and C. Russo, "Effects of particle aggregation and disaggregation on their inherent optical properties," Opt. Express 19(9), 7945–7959 (2011). [CrossRef] 36. L. Hu, X. Zhang, and M. J. Perry, "Light scattering by pure seawater at subzero temperatures," Deep Sea Res., Part I 162, 103306 (2020). [CrossRef] 37. R. Röttgers, D. McKee, and C. Utschig, "Temperature and salinity correction coefficients for light absorption by water in the visible to infrared spectral region," Opt. Express 22(21), 25093–25108 (2014). [CrossRef] 38. R. Röttgers, D. McKee, and S. B. Woźniak, "Evaluation of scatter corrections for ac-9 absorption measurements in coastal waters," Methods Oceanogr. 7, 21–39 (2013). [CrossRef]
How do I interpret the bids of others when other bidders may have relevant information about the value of the good?

I paid too much for it, but it's worth it. -Sam Goldwyn

The analysis so far has been conducted under the restrictive assumption of private values. In most contexts, bidders are not sure of the actual value of the item being sold, and information held by others is relevant to the valuation of the item. If I estimate an antique to be worth $5,000, but no one else is willing to bid more than $1,000, I might revise my estimate of the value down. This revision leads bidders to learn from the auction itself what the item is worth. The early bidders in the sale of oil lease rights in the Gulf of Mexico (the outer continental shelf) were often observed to pay more than the rights were worth. This phenomenon came to be known as the winner's curse. The winner's curse is the fact that the bidder who most overestimates the value of the object wins the bidding. Naïve bidders who don't adjust for the winner's curse tend to lose money because they win the bidding only when they've bid too high.

Figure 20.1 Normally Distributed Estimates

In Figure 20.1, the estimates are correct on average, which is represented by the fact that the distribution is centered on the true value v. Thus, a randomly chosen bidder will have an estimate that is too high as often as it is too low, and the average estimate of a randomly selected bidder will be correct. However, the winner of an auction will tend to be the bidder with the highest estimate, not a randomly chosen bidder.
The highest of five bidders will have an estimate that is too large 97% of the time. The only way the highest estimate is not too large is if all the estimates are below the true value. With 10 bidders, the highest estimate is larger than the true value with probability 99.9%, because the odds that all the estimates are less than the true value are \((1/2)^{10} \approx 0.1\%\). This phenomenon—that auctions tend to select the bidder with the highest estimate, and the highest estimate is larger than the true value most of the time—is characteristic of the winner's curse.

A savvy bidder corrects for the winner's curse. Such a correction is actually quite straightforward when a few facts are available, and here a simplified presentation is given. Suppose there are n bidders for a common value good, and the bidders receive normally distributed estimates that are correct on average. Let σ be the standard deviation of the estimates. (The standard deviation is a measure of the dispersion of a distribution: the square root of the average squared difference between the random value and its mean.) The estimates are also assumed to be independently distributed around the true value. Note that estimating the mean adds an additional layer of complexity. Finally, suppose that no prior information is given about the likely value of the good. In this case, it is a straightforward matter to compute a correction for the winner's curse. Because the winning bidder will generally be the bidder with the highest estimate of value, the winner's curse correction should be the expected amount by which the highest estimate exceeds the average estimate. This can be looked up in a table for the normal distribution. The values are given for selected numbers n in Table 20.1 "Winner's Curse Correction".
This table shows, as a function of the number of bidders, how much each bidder should reduce his estimate of value to correct for the fact that auctions select optimistic bidders. The units are standard deviations.

Table 20.1 Winner's Curse Correction

n        1     2     3     4     5     10    15
WCC (σ)  0     .56   .85   1.03  1.16  1.54  1.74

n        20    25    50    100   500   1000  10,000
WCC (σ)  1.87  1.97  2.25  2.51  3.04  3.24  3.85

For example, with one bidder, there is no correction because it was supposed that the estimates are right on average. With two bidders, the winner's curse correction is the amount that the higher of two will be above the mean, which turns out to be 0.56σ, a little more than half a standard deviation. This is the amount that should be subtracted from the estimate to ensure that, when the bidder wins, the estimated value is correct, on average. With four bidders, the highest is a bit over a whole standard deviation. As is apparent from the table, the winner's curse correction increases relatively slowly after 10 or 15 bidders. With a million bidders, it is 4.86σ. The standard deviation σ measures how much randomness or noise there is in the estimates. It is a measure of the average difference between the true value and the estimated value, and thus the average level of error. Oil companies know from their history of estimation how much error arises in their estimates. Thus, they can correct their estimates to account for the winner's curse using their historical inaccuracies.
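The entries of Table 20.1 are the expected amount by which the largest of n independent, normally distributed estimates exceeds the true value, in units of σ. A short Monte Carlo sketch (the trial count and seed are arbitrary choices) reproduces several of the entries:

```python
import random

def winners_curse_correction(n, trials=100_000, seed=1):
    """Expected excess of the highest of n estimates over the true value,
    in units of the estimate standard deviation sigma."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # n independent standard-normal estimate errors; the winner has the max.
        total += max(rng.gauss(0.0, 1.0) for _ in range(n))
    return total / trials

for n in (1, 2, 5, 10):
    print(n, round(winners_curse_correction(n), 2))
# Expected to land near the Table 20.1 entries: 0, 0.56, 1.16, 1.54.
```

A bidder with estimate v-hat and n − 1 rivals would therefore shade the bid toward v-hat − WCC(n)·σ before applying any further strategic considerations.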
If the bidders have the same information on a common value item, they will generally not earn profits on it. Indeed, there is a general principle that it is the privacy of information, rather than the accuracy of information, that leads to profits. Bidders earn profits on the information that they hold that is not available to others. Information held by others will be built into the bid price and therefore not lead to profits.

The U.S. Department of the Interior, when selling offshore oil leases, not only takes an up-front payment (the winning bid) but also takes one-sixth of the oil that is eventually pumped. Such a royalty scheme links the payment made to the outcome and, in a way, shares risk because the payment is higher when there is more oil. Similarly, a book contract provides an author with an up-front payment and a royalty. Many U.S. Department of Defense (DOD) purchases of major weapons systems involve cost-sharing, where the payments made pick up a portion of the cost. Purchases of ships, for example, generally involve 50%–70% cost sharing, which means the DOD pays a portion of cost overruns. The contract for U.S. television broadcast rights for the Summer Olympics in Seoul, South Korea, involved payments that depended on the size of the U.S. audience.

Royalties, cost-sharing, and contingent payments generally link the actual payment to the actual value, which is unknown at the time of the auction. Linkage shares risk, but linkage does something else, too. Linkage reduces the importance of estimates in the auction, replacing the estimates with actual values. That is, the price a bidder pays for an object, when fully linked to the true value, is just the true value. Thus, linkage reduces the importance of estimation in the auction by taking the price out of the bidder's hands, at least partially. The linkage principle (which, along with much of modern auction theory, was developed by Paul Milgrom (1948–))
states that in auctions where bidders are buyers, the expected price rises the more the price is linked to the actual value. (In a parallel fashion, the expected price in an auction where bidders are selling falls.) Thus, linking price to value generally improves the performance of auctions. While this is a mathematically deep result, an extreme case is straightforward to understand. Suppose the government is purchasing by auction a contract for delivery of 10,000 gallons of gasoline each week for the next year. Suppliers face risk in the form of gasoline prices; if the government buys at a fixed price, the suppliers' bids will build in a cushion to compensate for the risk and for the winner's curse. In addition, because their estimates of future oil prices will generally vary, they will earn profits based on their private information about the value. In contrast, if the government buys only delivery and then pays for the cost of the gasoline, whatever it might be, any profits that the bidders earned based on their ability to estimate gasoline prices evaporate. The overall profit level of bidders falls, and the overall cost of the gasoline supply can fall. Of course, paying the cost of the gasoline reduces the incentive of the supplier to shop around for the best price, and that agency incentive effect must be balanced against the reduction in bidder profits from the auction to select a supplier. Auctions, by their nature, select optimistic bidders. This phenomenon—that auctions tend to select the bidder with the highest estimate, and the highest estimate is larger than the true value most of the time—is known as the winner's curse. A savvy bidder corrects for the winner's curse. The size of the winner's curse correction is larger the more bidders there are, but it tends to grow slowly beyond a dozen or so bidders. There is a general principle that it is the privacy of information, rather than the accuracy of information, that leads to profits. 
Information held by others will be built into the bid price and therefore not lead to profits. The linkage principle states that in auctions where bidders are buyers, the expected price rises the more the price is linked to the actual value. Examples of linkage include English and Vickrey auctions, which link the price to the second bidder's information, and the use of royalties or cost shares.
Increased cardiorespiratory synchronization evoked by a breath controller based on heartbeat detection Yumiao Ren1,2 & Jianbao Zhang1 The cardiovascular and respiratory systems are functionally related to each other, but the underlying physiologic mechanism of cardiorespiratory coupling (CRC) is unclear. Cardiopulmonary phase synchronization is a form of cardiorespiratory coupling. However, it is difficult to study in experimental data, which are very often inherently nonstationary and thus contain only quasiperiodic oscillations. Thus, the main issues discussed in this paper are how to enhance and quantify cardiopulmonary synchronization, the changes in cardiac function under the conditions of cardiopulmonary synchronization, and the physiological mechanisms behind them. The results showed that cardiorespiratory synchronization significantly increased when breathing was controlled by heartbeat detection (p < 0.001). The respiratory sinus arrhythmia (RSA) obviously decreased (p < 0.01) in the 2/2 mode and increased (p < 0.001) in the 4/4 mode. During the 2/2 breathing pattern compared with spontaneous breathing, systolic blood pressure (SBP) decreased (p < 0.05), and diastolic blood pressure (DBP), mean arterial blood pressure (MBP), and stroke volume (SV) decreased significantly (p < 0.01). During the 4/4 breathing pattern compared to the 2/2 breathing pattern, DBP, MBP, and cardiac output (CO) increased (p < 0.05), and SV increased significantly (p < 0.01). When analyzing the relationships among these parameters, RSA was found to be associated with the respiration rate in all respiratory patterns. We demonstrated that voluntary cardiorespiratory synchronization (VCRS) can effectively enhance cardiopulmonary phase synchronization, but cardiopulmonary phase synchronization and RSA represent different aspects of the cardiorespiratory interaction.
It was found that cardiac function parameters such as blood pressure and stroke volume can be affected by the number of heartbeats contained in the expiratory and inspiratory phases regulated through VCRS. So we can study cardiopulmonary phase synchronization by VCRS, and this approach can be used with experimental data to investigate the physiological mechanism of cardiopulmonary coupling. It is well known that the heart and lung control systems are coupled with each other. Cardiac and respiratory rhythms in humans are synchronized. However, the underlying physiologic mechanism of cardiopulmonary synchronization is unclear. Research on cardiopulmonary synchronization has a certain guiding significance for the diagnosis and treatment of some diseases and for healthcare. Studies have shown that the cardiopulmonary synchronization of athletes and swimmers is superior to that of ordinary people [1, 2], and cardiopulmonary synchronization during meditation practice is enhanced compared to that during natural breathing [3, 4]. These results seem to suggest that enhanced cardiopulmonary synchronization represents a better physiological state. Thus, how to enhance and quantify cardiopulmonary synchronization, the changes in cardiac function under the conditions of cardiopulmonary synchronization, and the physiological mechanisms behind them are the main issues discussed in this paper. The interaction between the cardiac and respiratory systems is traditionally identified through the respiratory sinus arrhythmia (RSA), which accounts for the periodic variation of the heart rate within a breathing cycle. With the development of nonlinear dynamics, phase-synchronization analysis has been used to examine the relationships in cardiorespiratory coupling [1, 5,6,7,8].
More recently, phase synchronization between heartbeat and breathing has been studied using the synchrogram method [9], which was first applied by Schäfer et al. to examine the different synchronous states and transition rates of cardiorespiratory coupling [2]. Hoyer et al. improved the detection method of phase synchronization [10]. At present, the cardiorespiratory synchrogram method has been widely used in cardiopulmonary coupling research [11,12,13]. In this paper, we investigated the physiological mechanism of cardiopulmonary coupling by enhancing cardiopulmonary synchronicity using the heartbeat to control breathing. The only method of controlling respiratory-induced variation before 1964 was for subjects to hold their breath. In 1964, Schmitt first put forward controlling breathing in synchronization with an electrocardiogram/vectorcardiogram (ECG/VCG), a technique called voluntary cardiorespiratory synchronization (VCRS) [14]. VCRS has been applied to study the changes in heart rate variability with human age and body position [15], to investigate respiratory effects on stroke volume [16], and to study the influence of respiration on changes in blood pressure and heart rate [17]. However, the enhancement of cardiopulmonary synchrony by VCRS has not been quantified and verified, and whether cardiopulmonary synchronization time influences cardiac function has not been discussed. These points are important for further revealing the cardiopulmonary coupling mechanism. Our aim is to use VCRS to study respiratory effects on blood pressure, heart rate, stroke volume, and cardiac output, using the phase-synchronization technique to quantify cardiopulmonary synchronization time. On the basis of this synchronicity enhancement, we analyzed the effects of cardiopulmonary synchronicity on heart function and discussed the underlying physiologic mechanism of cardiorespiratory coupling. Thirty healthy male subjects (19–27 years old) voluntarily participated in the study.
Each subject was given a medical survey questionnaire to ensure that they had no respiratory or cardiovascular diseases. The investigation was performed with the approval of the Xi'an Jiaotong University Ethics Committee, and all subjects signed an approved informed consent after the study procedures had been explained. Experimental protocol The experiment was performed in a quiet laboratory between 8 p.m. and 10 p.m., and the temperature was controlled at 22–24 °C. The subjects were asked to avoid alcohol, tea, coffee, and strenuous exercise for 12 h and to get enough sleep the night before the experiment. Participants were seated comfortably. Before measurements were taken, participants rested and remained calm for 10 min. Then, the subjects received instructions for breathing in the sitting position. Three sets of data were recorded, with a rest period of 10 min between each set of measurements. The first measurement was made with subjects breathing spontaneously for 6 min. The subjects then practiced the VCRS paced breathing for 2 to 4 min, and the researchers made sure there were no problems during the procedure. The subject was then asked to follow the VCRS paced breathing mode for 6 min using a sound pattern derived from the ECG. This breath controller was developed in our laboratory. It outputs a sound signal by detecting the heartbeat signal and counting the heartbeats. The subject was instructed by the sound to breathe in synchrony with his heartbeat. The device signaled the subject when to inhale and exhale based on a fixed number of heartbeats for each phase of the respiratory cycle: for example, inspire for the first two heartbeats and expire for the next two heartbeats in a 2/2 pattern. The 2/2 pattern was selected as the second measurement mode for 6 min. Figure 1a shows the ECG and respiration signals of a subject using this 2/2 breathing pattern. The other measurement mode was the 4/4 paced breathing VCRS pattern for 6 min.
Figure 1b shows an example of this 4/4 breathing pattern. The 2/2 and 4/4 breathing patterns were chosen to increase and decrease the respiration rate, respectively, while still being subjectively comfortable for the participants. The relationship between the ECG and respiration signals of a subject using the 2/2 and 4/4 breathing patterns. Above is the ECG signal, and below is the breathing signal. a ECG and respiration signals of a subject in the voluntary cardiorespiratory synchronization 2/2 mode, during which the subject inspired for two heartbeats and expired for two heartbeats in one respiration cycle. b ECG and respiration signals of a subject in the voluntary cardiorespiratory synchronization 4/4 mode, during which the subject inspired for four heartbeats and expired for four heartbeats. The top trace shows the ECG; the bottom trace shows respiration measured by a respiration belt Respiration, ECG, and thoracic electric bioimpedance (TEB) signals were acquired online using a multichannel physiology recorder (MP150, Biopac, USA) and software (AcqKnowledge 4.2, Biopac Systems) on a PC (Gateway) and collected at 1000 samples per second. Beat-to-beat arterial blood pressure was logged continuously via noninvasive finger photoplethysmography (FMS, Finapres Measurement Systems, Arnhem, Netherlands). The two instruments were synchronized by attaching the output interface (BNC) of the FMS to an analog input channel of the MP150. The ECG was measured using disposable self-adhesive Ag/AgCl ECG electrodes, which were placed on the subject's right arm, left leg, and right leg. The connection followed the standard bipolar ECG lead II configuration: the left leg was connected to the in-phase input of the amplifier, the right arm to the inverted input, and the right leg to the ground wire. The TEB was recorded using a NICO100C (Biopac Systems, Inc.).
The NICO100C noninvasive cardiac output amplifier records the parameters associated with cardiac output measurements. It incorporates a precision high-frequency current source, which injects a very small (400 μA) measurement current through the thoracic volume defined by the placement of a set of current source electrodes. Respiration was recorded with a respiratory effort transducer strain assembly belt TSD201 (Biopac Inc.) that measures thoracic expansion and contraction. Voluntary cardiorespiratory synchronized breathing (VCRS) was controlled with the device, introduced above, that generated a sound to instruct the subject when to inhale and exhale. Phase synchronization and synchrogram Phase synchronization is a kind of cooperative behavior between two weakly interacting oscillators. It is classically understood as phase locking and defined as [18]: $$\varphi_{n,m} = \left| {n\varPhi_{1} - m\varPhi_{2} - \delta } \right| < {\text{const}},$$ where \(n\) and \(m\) are integers, and \(\varPhi_{1}\) and \(\varPhi_{2}\) are the phases of the two oscillators. The \(n:m\) phase locking manifests as a variation of \(\varphi_{n,m}\) around a horizontal plateau. The synchrogram is a visualization tool which enables the detection of synchronization epochs in bivariate data. The synchrogram is constructed by plotting the corresponding normalized respiratory phase \(\psi_{m} (t_{k} )\) for every heartbeat within m respiratory cycles [19]: $$\psi_{m} (t_{k} ) = \frac{1}{2\pi }\left( {\varPhi_{r} \left( {t_{k} } \right)\bmod 2m\pi } \right),$$ where \(t_{k}\) is the time of the \(k\) th R peak and \(\varPhi_{r}\) is the corresponding respiratory phase.
The respiratory phase, \(\varPhi_{r} \left( t \right)\), is calculated by the method based on the marker events using the following formula: $$\varPhi_{r} \left( t \right) = 2\pi \frac{{t - t_{k} }}{{t_{k + 1} - t_{k} }} + 2\pi k,\quad t_{k} \le t < t_{k + 1},$$ where \(t_{k}\) is the time of the onset of the \(k\) th expiration. The local maximum of the respiratory signal is taken as the marker event of the respiratory oscillator. In a perfect \(m:n\) phase locking, \(\psi_{m} \left( {t_{k} } \right)\) attains exactly the same \(n\) different values within \(m\) adjacent respiratory cycles, and the synchrogram consists of \(n\) horizontal strips. The method of phase recurrences, based on a heuristic approach, was used to quantify the cardiorespiratory synchrogram. Detection with phase recurrences offered the best temporal resolution and the highest number of synchronized sequences [20]. Generally, an \(n:m\) synchronization will be identified if the difference between the normalized relative respiratory phase corresponding to the \(\left( {i + n} \right)\) th R peak and the one corresponding to the \(i\) th R peak is within a defined tolerance ε. This condition has to be fulfilled for at least \(k\) successive R peaks: $$\begin{aligned} & \exists k > 1,\quad \left| {\psi_{\text{m}} \left( {t_{i + n} } \right) - \psi_{\text{m}} \left( {t_{i} } \right)} \right| < \varepsilon , \\ & \quad i \in \left\{ {l, \ldots ,\;l + k - 1, \;0 \le l \le N_{r} - k + 1} \right\} \\ \end{aligned},$$ where \(N_{r}\) is the total number of R peaks. To be compatible with the description of parallel horizontal lines during synchronization, \(k \ge m\) needs to be fulfilled. This procedure allows the detection of a structure of parallel horizontal lines with a length of 2m successive normalized relative phases. It is most valuable in cases where one of the signals resembles a point process. The synchrogram is a stroboscopic view of the phase of the respiration signal at the times of the R waves.
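The synchrogram construction and the phase-recurrence test above can be sketched as follows (Python with NumPy; a minimal illustration under our own naming, not the authors' code, assuming R-peak times and expiration-onset times have already been extracted):

```python
import numpy as np

def respiratory_phase(t, onsets):
    """Phi_r(t) = 2*pi*(t - t_k)/(t_{k+1} - t_k) + 2*pi*k for t_k <= t < t_{k+1},
    where onsets[k] is the time of the k-th expiration onset."""
    t = np.asarray(t, float)
    onsets = np.asarray(onsets, float)
    k = np.clip(np.searchsorted(onsets, t, side="right") - 1, 0, len(onsets) - 2)
    return 2.0 * np.pi * ((t - onsets[k]) / (onsets[k + 1] - onsets[k]) + k)

def synchrogram(r_peaks, onsets, m):
    """psi_m(t_k) = (Phi_r(t_k) mod 2*m*pi) / (2*pi): the respiratory phase,
    wrapped over m breathing cycles, sampled stroboscopically at each R peak."""
    phi = respiratory_phase(r_peaks, onsets)
    return np.mod(phi, 2.0 * m * np.pi) / (2.0 * np.pi)

def detect_sync(psi, n, m, eps=0.05, k_min=None):
    """Phase-recurrence test: flag beats where |psi[i+n] - psi[i]| < eps
    (circular distance, since psi wraps at m) holds for at least k_min
    successive R peaks (k_min >= m, per the text)."""
    if k_min is None:
        k_min = max(2, m)
    psi = np.asarray(psi, float)
    d = np.abs(psi[n:] - psi[:-n])
    d = np.minimum(d, m - d)            # wrap-around phase distance
    ok = d < eps
    mask = np.zeros(len(psi), bool)
    i = 0
    while i < len(ok):                  # keep only sufficiently long runs
        if ok[i]:
            j = i
            while j < len(ok) and ok[j]:
                j += 1
            if j - i >= k_min:
                mask[i:j + n] = True    # beats covered by the run
            i = j
        else:
            i += 1
    return mask
```

With perfect 4:1 locking (four heartbeats per breath), `synchrogram(..., m=1)` yields four constant values, the four horizontal strips described above, and `detect_sync(psi, n=4, m=1)` flags the whole recording; summing the flagged beat durations gives the synchronization time used later as the strength index.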
R peaks were detected using a combination of wavelet transforms and thresholding methods [21]. The peaks and troughs of the breathing signal were extracted using a thresholding algorithm. The times of the R peaks in the ECG and of the inspiratory onsets in the respiratory signal were obtained and served as event markers for the synchrogram, as shown in Fig. 3. The duration of the synchronization epochs was calculated and used as the index of synchronization strength in this article. Respiratory sinus arrhythmia (RSA) Respiratory sinus arrhythmia is used to explore the connection between respiration and changes in heart rate. The RSA index is computed using the peak-valley method [22]. This method uses both a recorded ECG lead II signal and a respiration signal. The respiration signal is used to locate periods of inhalation and exhalation. Inhalation begins at valleys in the signal, while expiration begins at peaks. The RSA index is the average difference between the highest and lowest heart rate (HR) during each respiratory cycle (HR Max − HR Min) [23], expressed in milliseconds. Arterial blood pressure The peaks and troughs of the blood pressure signal were extracted as systolic and diastolic blood pressures by combining them with the ECG signal. The mean arterial blood pressure (MBP) (= (2 × diastolic)/3 + systolic/3) was calculated from the systolic and diastolic blood pressures. Extracting feature points from the blood pressure signal with reference to the ECG signal is an advantage of collecting the blood pressure and ECG signals synchronously. The maximum and minimum of the blood pressure signal were matched to the cardiac cycle between two adjacent R-wave peaks. Each cardiac cycle includes one systolic blood pressure and one diastolic blood pressure. The algorithm has a high detection rate, as shown in Fig. 2. The peaks and troughs in the blood pressure signals of a subject. Systolic blood pressure (SBP), calculated from the peaks, is marked with cross symbols.
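A minimal sketch of the peak-valley RSA index and the mean arterial pressure formula (Python; the function names and inputs are our own hypothetical choices, and the RSA is computed from inter-beat RR intervals, which is how an HR Max − HR Min difference comes out in milliseconds):

```python
import numpy as np

def rsa_peak_valley(r_peaks_s, insp_onsets_s):
    """Peak-valley RSA: for each respiratory cycle, take the longest minus the
    shortest inter-beat (RR) interval; average over cycles. Returns ms."""
    r = np.asarray(r_peaks_s, float)
    rr = np.diff(r)                     # RR intervals (s), stamped at the 2nd beat
    t_rr = r[1:]
    vals = []
    for a, b in zip(insp_onsets_s[:-1], insp_onsets_s[1:]):
        in_cycle = rr[(t_rr >= a) & (t_rr < b)]
        if len(in_cycle) >= 2:          # need at least two beats in the cycle
            vals.append(in_cycle.max() - in_cycle.min())
    return 1000.0 * float(np.mean(vals))

def mean_arterial_pressure(sbp, dbp):
    """MBP = (2 * diastolic)/3 + systolic/3, as in the text."""
    return (2.0 * dbp) / 3.0 + sbp / 3.0
```

For example, RR intervals alternating between 0.8 s and 1.0 s within each breath give an RSA index of 200 ms, and SBP/DBP of 120/80 mmHg give an MBP of about 93.3 mmHg.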
Diastolic blood pressure (DBP), calculated from the troughs, is marked with circles Stroke volume and cardiac output The stroke volume (SV) and cardiac output (CO) can be derived from the popular noninvasive method of impedance cardiography. CO is defined as the volume of blood pumped each minute (\(= {\text{SV}} \times {\text{HR}}\)). SV can be calculated from the Kubicek formula [16]: $${\text{SV}} = \rho \frac{{L^{2} }}{{Z_{0}^{2} }}\left( {\frac{{{\text{d}}Z}}{{{\text{d}}t}}} \right)_{\text{max} } \times {\text{LVET}},$$ where \({\text{SV}}\) is the stroke volume, i.e., the volume of blood pumped by the left ventricle in a single beat (mL/beat), \(\rho\) is the blood resistivity at 100 kHz (typical value for a normal hematocrit = 150 Ω cm), L is the mean distance between the inner pair of electrodes (cm), \(Z_{0}\) is the average basal thoracic impedance (Ω), LVET is the left ventricular ejection time (s), \(\left( {{\text{d}}Z/{\text{d}}t} \right)_{\text{max} }\) is the maximum value of the (\({\text{d}}Z/{\text{d}}t\)) signal (Ω/s), and (\({\text{d}}Z/{\text{d}}t\)) is the derivative of the cardiac systolic impedance. Statistical analyses were performed with SigmaPlot software (Systat Software, Inc., USA). If the data were normally distributed, paired t tests were used. Otherwise, rank sum tests were used. A p value < 0.05 was considered statistically significant, and data are represented as the mean ± SEM. Pearson's correlation coefficients were used for the correlation analyses. Changes in the cardiorespiratory synchronization time (Syn), RSA, and breath rate (BR) Compared with spontaneous breathing, Syn was significantly increased (p < 0.001 for the 2/2 breathing pattern, p < 0.01 for the 4/4 breathing pattern). RSA obviously decreased (p < 0.01) in the 2/2 mode and increased (p < 0.001) in the 4/4 mode. In contrast, the BR was significantly increased (p < 0.001) during the 2/2 breathing pattern and significantly decreased (p < 0.001) during the 4/4 breathing pattern (Table 1).
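The Kubicek formula and CO = SV × HR translate directly into code (the numeric values in the demo are illustrative assumptions, not measurements from the study):

```python
def stroke_volume_kubicek(rho, L, Z0, dZdt_max, lvet):
    """Kubicek formula: SV = rho * (L/Z0)**2 * (dZ/dt)_max * LVET.
    rho: blood resistivity (ohm*cm), L: inner electrode distance (cm),
    Z0: basal thoracic impedance (ohm), dZdt_max: ohm/s, lvet: s.
    Returns SV in mL per beat (ohm*cm * cm^2/ohm^2 * ohm/s * s = cm^3)."""
    return rho * (L / Z0) ** 2 * dZdt_max * lvet

def cardiac_output(sv_ml, hr_bpm):
    """CO (L/min) = SV (mL/beat) * HR (beats/min) / 1000."""
    return sv_ml * hr_bpm / 1000.0

# Illustrative values only: rho = 150 ohm*cm (text), others assumed plausible
sv = stroke_volume_kubicek(rho=150.0, L=30.0, Z0=25.0, dZdt_max=1.2, lvet=0.3)
co = cardiac_output(sv, 70.0)
```

With these assumed inputs the sketch yields an SV of roughly 78 mL and a CO of roughly 5.4 L/min, both within the usual physiological range, which is a quick sanity check on the units.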
Table 1 Syn and BR during spontaneous breathing and the 2/2 and 4/4 breathing patterns The results showed that cardiorespiratory synchronization was significantly increased by breath control based on heartbeat detection. A typical cardiorespiratory synchrogram of one subject is shown in Fig. 3. During spontaneous breathing, synchronization was found in 26 subjects, and 4 subjects did not exhibit any synchronization. The average synchronization epoch was 19.79 s in 360 s, which was 5.50% of the total recording. During the 2/2 breathing pattern, synchronization was found in 29 subjects, and 1 subject did not exhibit any synchronization. The mean synchronization epoch was 84.43 s in 360 s, indicating that heartbeat and breath rate were synchronized for 23.45% of the total recording; this was far longer than that of spontaneous breathing. During the 4/4 breathing pattern, the average synchronization epoch in all 24 subjects was 60.85 s in 360 s, which was 16.90% of the total recording; this was also far longer than that of spontaneous breathing. Furthermore, the breath rate was in accordance with the breathing controller based on heartbeat detection. These results suggest that the levels of cardiorespiratory coupling were significantly higher in the 2/2 and 4/4 breathing modes than during spontaneous breathing. Typical cardiorespiratory synchrogram of one subject. Typical cardiorespiratory synchrograms of one subject for spontaneous breathing (a), the 2/2 breathing pattern (b), and the 4/4 breathing pattern (c). Synchronization, marked with cross symbols, is characterized by the arrangement of the wrapped phase in horizontal lines. In this subject, 4:1 phase synchronization was detected for both the 2/2 breathing pattern and spontaneous breathing, but the synchronization epoch in the 2/2 breathing pattern was obviously longer.
8:1 phase synchronization was detected for the 4/4 breathing pattern Effect on blood pressure, HR, SV, and CO To determine the effect on the cardiovascular system of ECG-triggered breathing that enhances cardiopulmonary synchronicity, we analyzed cardiovascular function parameters during the 2/2 and 4/4 breathing patterns relative to those during the spontaneous breathing pattern, and between the 2/2 and 4/4 breathing patterns (Table 2). These cardiovascular function parameters include blood pressure (SBP, DBP, and MBP), heart rate (HR), stroke volume (SV), and cardiac output (CO). During the 2/2 breathing pattern compared with spontaneous breathing, SBP decreased (p < 0.05), and DBP, MBP, and SV decreased significantly (p < 0.01). During the 4/4 breathing pattern compared with spontaneous breathing, none of the parameters showed an obvious change. During the 4/4 breathing pattern compared to the 2/2 breathing pattern, DBP, MBP, and CO increased (p < 0.05), and SV increased significantly (p < 0.01); no obvious difference was found in SBP and HR. Table 2 Cardiac function parameters during spontaneous breathing and the 2/2 and 4/4 breathing patterns Correlation among Syn, RSA, BP, and breath rate To study the mechanism of cardiopulmonary coupling, we analyzed the correlation coefficients between the cardiopulmonary synchronization time and blood pressure, cardiac output, stroke volume, and respiration rate. No significant correlation was found. When analyzing the correlation between RSA and these parameters, RSA was found to be associated with the respiration rate and BP in the 2/2 mode (Fig. 4). RSA was significantly negatively correlated with the respiration rate in all the respiratory modes (Spt, r = − 0.535, p = 0.002; 2/2 mode, r = − 0.741, p = 0.000003; 4/4 mode, r = − 0.757, p = 0.000001). RSA was obviously negatively correlated with DBP and MBP in the 2/2 respiratory mode (r = − 0.551, p = 0.002; r = − 0.468, p = 0.009).
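The Pearson correlations reported above can be computed as follows (Python; the data here are synthetic stand-ins, since the per-subject values are not given in the text, and the negative RSA-versus-rate dependence is only mimicked for illustration):

```python
import numpy as np

# Synthetic stand-in data for 30 subjects (assumption, not the study's data)
rng = np.random.default_rng(1)
resp_rate = rng.uniform(10.0, 25.0, 30)                    # breaths/min
rsa = 300.0 - 8.0 * resp_rate + rng.normal(0.0, 20.0, 30)  # ms, negative slope

r = np.corrcoef(resp_rate, rsa)[0, 1]            # Pearson's r
n = len(rsa)
t = r * np.sqrt((n - 2) / (1.0 - r ** 2))        # t statistic, n-2 df
print(f"r = {r:.3f}, t = {t:.2f}")
```

For 28 degrees of freedom, a |t| beyond about 3.7 corresponds to a two-sided p below 0.001, which is how reported values such as r = − 0.741, p = 0.000003 arise from samples of this size.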
No obvious correlation was found during spontaneous breathing and the 4/4 breath pattern. Correlation analysis of the power spectral energy of HRV and breath rate. Scatterplots for RSA and respiratory rate during spontaneous breathing (Spt), the 2/2 breathing pattern (2/2), and the 4/4 breathing pattern (4/4) (a). Scatterplots for RSA and DBP during spontaneous breathing (Spt), the 2/2 breathing pattern (2/2), and the 4/4 breathing pattern (4/4) (b). Scatterplots for RSA and MBP during spontaneous breathing and the 2/2 and 4/4 breathing modes (c) Phase synchronization can be used to analyze cardiopulmonary coupling [1, 2, 11]. We analyzed the synchronization time of the heart and respiration in the 2/2, 4/4, and spontaneous breathing patterns by using phase synchrograms and found that controlling respiration by heartbeat can significantly enhance cardiopulmonary coupling and control cardiopulmonary synchronization. Compared with spontaneous breathing, the synchronization time of the heart rate and respiration in the 2/2 and 4/4 modes was significantly enhanced, the respiratory rate increased significantly during the 2/2 pattern, and the respiratory rate decreased significantly during the 4/4 pattern (Table 1). In addition, compared to the 2/2 breathing mode, the breathing rate was significantly reduced in the 4/4 breathing mode, but the synchronization time did not change significantly. These results show that the synchronization time does not track the change in respiration rate, which is in agreement with Ref. [24]. Thus, the enhancement of the synchronization time is not caused by the change in respiratory rate. At the same time, the method of adjusting phase synchronization by VCRS is to control, through the heartbeat, the initial phase and the ratio of heartbeats to the respiratory cycle, which is important for enhancing the phase-synchronization time [11].
This showed that VCRS can effectively enhance cardiopulmonary synchronization and control the number of heartbeats contained in the expiratory and inspiratory phases of the respiratory cycle. It provides experimental conditions for studying the physiological mechanism of cardiopulmonary coupling. In this paper, we used VCRS to study the circulatory system and found that blood pressure and stroke volume were reduced in the 2/2 mode by VCRS. Through analysis, we think that this is related to the number of heartbeats contained in the expiratory and inspiratory phases regulated by VCRS. Compared with spontaneous breathing, the 2/2 respiratory pattern resulted in decreased blood pressure and stroke volume (Table 2), and there were no significant changes in mean HR because vagal tone did not change [23]. During the 2/2 respiratory pattern, the ratio of inhalation time to exhalation time in the respiratory cycle was approximately 1:1, and both the inhalation and exhalation phases contained two ECG cycles. In the spontaneous breathing mode, the inhalation phase contained two ECG cycles, and the exhalation phase contained three ECG cycles [25], so the ratio of inhalation time to exhalation time was approximately 1:1.5. Compared with spontaneous breathing, the duration of inhalation increased within the whole respiratory cycle in the 2/2 breathing mode. When inhaling, the decrease in pleural pressure resulted in increased right ventricular filling, which decreased left ventricular filling and stroke volume and further reduced systolic blood pressure. Exhalation had the opposite effect and was accompanied by a delayed effect of the increased right ventricular stroke volume due to inhalation. The maximum left ventricular stroke volume occurred during the posterior half of the exhalation phase. When the breathing amplitude increased, namely, when the tidal volume increased, the magnitude of the change in pleural pressure increased, and the effect became more pronounced [26, 27].
Therefore, blood pressure and stroke volume decreased in the 2/2 mode relative to those during spontaneous respiration. Because the maximum left ventricular stroke volume occurred during the latter half of the exhalation phase, and although the time ratio of the inhalation phase to the exhalation phase was 1:1 in the 4/4 mode, the exhalation phase contained four ECG cycles. The exhalation time was ample compared to that of the 2/2 mode and showed little difference compared to that of spontaneous breathing. Therefore, in the 4/4 mode, blood pressure, stroke volume, and cardiac output increased compared to the 2/2 mode, and there was no significant change compared to spontaneous breathing. This indicates that blood pressure and stroke volume can be affected by changes in intrathoracic pressure. Our study showed that the decrease in blood pressure and stroke volume is related to the breathing mode imposed by VCRS. Our results demonstrate that cardiopulmonary phase synchronization and the traditionally studied respiratory sinus arrhythmia represent different aspects of the cardiorespiratory interaction. This is consistent with previous studies [13, 24, 28]. In this study, the strength of phase synchronization is represented by the phase-synchronization time, and the strength of RSA is represented by HR Max − HR Min. We investigated the relationship between phase synchronization and RSA. Our analyses did not reveal a statistical relation between the degree of cardiopulmonary phase synchronization and the strength of RSA. RSA was affected by the respiration rate (Fig. 4a). However, the cardiopulmonary phase-synchronization time was not correlated with the respiration rate, and the changes in cardiopulmonary phase-synchronization time differ from those in RSA across all breathing patterns (Table 1).
These differences are due to the fact that, whereas RSA is a measure of the amplitude of variation of the heartbeat intervals within the breathing cycles, phase synchronization is characterized by the clustering of heartbeats at specific phases of the breathing cycle. This clustering is independent of the amplitude of heart rate modulation [24]. Cardiorespiratory phase synchronization is a type of cardiorespiratory coupling that manifests through a predilection for heartbeats to occur at specific points relative to the phase of the respiratory cycle [29], which can be controlled by VCRS. However, RSA is quite obviously another manifestation of the cardiorespiratory interaction [30]. In our study, RSA was significantly changed in all respiration modes and was correlated with breath frequency and BP (Fig. 4). However, it was related to blood pressure only in the 2/2 breathing mode. It is probable that RSA is mainly affected by breath frequency. Cardiopulmonary phase synchronization and RSA are different forms of the cardiorespiratory interaction, which can be used cooperatively or independently in cardiopulmonary coupling studies. Our study demonstrates that VCRS can effectively enhance cardiopulmonary phase synchronization, although phase synchronization is difficult to study in experimental data, which are very often inherently nonstationary and thus contain only quasiperiodic oscillations. We can study cardiopulmonary phase synchronization by VCRS. In our experimental data, we found that the 2/2 mode can lower blood pressure, and we think that this is related to the number of heartbeats contained in the expiratory and inspiratory phases regulated by VCRS. It may be possible to control cardiovascular parameters by controlling the respiratory rate and the number of heartbeats contained in the expiratory and inspiratory phases of the respiratory cycle through VCRS, thereby regulating cardiac function.
The datasets used during the current study are available from the corresponding author on reasonable request. CRC: cardiorespiratory coupling; RSA: respiratory sinus arrhythmia; MBP: mean arterial blood pressure; SV: stroke volume; VCRS: voluntary cardiorespiratory synchronization; ECG/VCG: electrocardiogram/vector cardiogram; HR: heart rate; LVET: left ventricular ejection time; Syn: cardiorespiratory synchronization time; BR: breath rate; Spt: spontaneous breathing. Schäfer C, et al. Heartbeat synchronized with ventilation. Nature. 1998;392(6673):239. Schäfer C, et al. Synchronization in the human cardiorespiratory system. Phys Rev E. 1999;60(1):857–70. Wu SD, Lo PC. Cardiorespiratory phase synchronization during normal rest and inward-attention meditation. Int J Cardiol. 2010;141(3):325–8. Büssing A, Matthiessen PF, Cysarz D. Cardiorespiratory synchronization during Zen meditation. Eur J Appl Physiol. 2005;10(Supplement s1):10–1. Zhang D, et al. Effects of acute hypoxia on heart rate variability, sample entropy and cardiorespiratory phase synchronization. BioMed Eng OnLine. 2014;13(1):73. Zhang Q, et al. Cardiovascular and cardiorespiratory phase synchronization in normovolemic and hypovolemic humans. Eur J Appl Physiol. 2015;115(2):417–27. Lucchini M, et al. Characterization of cardiorespiratory phase synchronization and directionality in late premature and full term infants. Physiol Meas. 2018. https://doi.org/10.1088/1361-6579/aac553. SureshkumarRaju S, et al. Darcy-Forchheimer flow and heat transfer augmentation of a viscoelastic fluid over an incessant moving needle in the presence of viscous dissipation. Microsyst Technol. 2019. https://doi.org/10.1007/s00542-019-04340-3. Kuhnhold A, et al. Quantifying cardio-respiratory phase synchronization-a comparison of five methods using ECGs of post-infarction patients. Physiol Meas. 2017;38(5):925. Hoyer D, Hoyer O, Zwiener U. A new approach to uncover dynamic phase coordination and synchronization. IEEE Trans Bio-med Eng. 2000;47(1):68.
Bartsch R, et al. Experimental evidence for phase synchronization transitions in the human cardiorespiratory system. Phys Rev Lett. 2007;98(5):054102.
Sola-Soler J, et al. Cardiorespiratory phase synchronization in OSA subjects during wake and sleep states. Conf Proc IEEE Eng Med Biol Soc. 2015;2015:7708–11.
Sola-Soler J, Cuadros A, Giraldo BF. Cardiorespiratory phase synchronization increases during certain mental stimuli in healthy subjects. Conf Proc IEEE Eng Med Biol Soc. 2018;2018:5298–301.
Patterson RB, Belalcazar A, Pu Y. Voluntary cardio-respiratory synchronization. IEEE Eng Med Biol Mag. 2004;23(6):52–6.
Patterson R, Kaiser D. Heart rate change as a function of age, tidal volume and body position when breathing using voluntary cardiorespiratory synchronization. Physiol Meas. 1997;18(3):183.
Wang L, Patterson DRP, Raza SB. Respiratory effects on cardiac related impedance indices measured under voluntary cardio-respiratory synchronisation (VCRS). Med Biol Eng Comput. 1991;29:505–10.
Mason LI, Patterson RP. Determining the relationship of heart rate and blood pressure using voluntary cardio-respiratory synchronization (VCRS). Physiol Meas. 2003;24(4):847.
Lotrič MB, Stefanovska A. Synchronization and modulation in the human cardiorespiratory system. Physica A. 2000;283(3):451–61.
Zhang J, Yu X, Xie D. Effects of mental tasks on the cardiorespiratory synchronization. Respir Physiol Neurobiol. 2010;170(1):91–5.
Cysarz D, et al. A quantitative comparison of different methods to detect cardiorespiratory coordination during night-time sleep. Biomed Eng Online. 2004;3(1):44.
Rabbani H, et al. R peak detection in electrocardiogram signal based on an optimal combination of wavelet transform, Hilbert transform, and adaptive thresholding. J Med Signals Sens. 2011;1(2):91–8.
Grossman P, van Beek J, Wientjes C. A comparison of three quantification methods for estimation of respiratory sinus arrhythmia. Psychophysiology. 1990;27(6):702–14.
Shaffer F, Ginsberg JP. An overview of heart rate variability metrics and norms. Front Public Health. 2017;5:258.
Bartsch RP, et al. Phase transitions in physiologic coupling. Proc Natl Acad Sci USA. 2012;109(26):10181–6.
Almasi JJ, Schmitt OH. Basic technology of voluntary cardiorespiratory synchronization in electrocardiology. IEEE Trans Biomed Eng. 1974;21(4):264–73.
Ruskin J, et al. Pressure-flow studies in man: effect of respiration on left ventricular stroke volume. Circulation. 1973;48(1):79.
Fontecave-Jallon J, et al. A model of mechanical interactions between heart and lungs. Philos Trans A Math Phys Eng Sci. 2009;367(1908):4741–57.
Bartsch RP, et al. Three independent forms of cardio-respiratory coupling: transitions across sleep stages. Comput Cardiol. 2014;41:781–4.
Krause H, et al. On the difference of cardiorespiratory synchronisation and coordination. Chaos. 2017;27(9):093933.
Sobiech T, et al. Cardiorespiratory coupling in young healthy subjects. Physiol Meas. 2017;38(12):2186–202.
Xiaoni Wang, Lin Xie, and Binbin Liu helped collect experimental data and gave advice on the experimental design. This work was supported by the National Natural Science Foundation of China (No. 31670954); the funding mainly covered the construction of experimental platforms and the purchase of disposable experimental consumables.
Key Laboratory of Biomedical Information Engineering of Education Ministry, Xi'an Jiaotong University, Xianning West Road, Xi'an, 710049, China: Yumiao Ren & Jianbao Zhang. School of Electronics and Information Engineering, Xi'an Technological University, Xi'an, 710032, China.
JZ directed the experimental design and the method of data analysis. YR collected and analyzed the data, and was a major contributor in writing the manuscript. Both authors read and approved the final manuscript. Correspondence to Jianbao Zhang. The investigation was performed with the approval of the Xi'an Jiaotong University Ethics Committee.
All subjects signed an approved informed-consent form after the study procedures had been explained, and consented to publication.
Ren, Y., Zhang, J. Increased cardiorespiratory synchronization evoked by a breath controller based on heartbeat detection. BioMed Eng OnLine 18, 61 (2019). doi:10.1186/s12938-019-0683-9
The Accessible Universe
A blog mainly concerning astronomy, physics, and general science. It also serves as support for the writing of a set of notes for General Astronomy, to be available online as well as in eBook format.

Relative size of the Sun and the Earth

TAU Chapter 1 --- Scales of Space and Time

Increasing size scales, roughly a factor of a billion each step

One of the most important ideas to begin to understand is the immense difference in scales and sizes that we'll be investigating. It's hard to get around the prejudice that limits our imaginations to things we've encountered on the Earth --- to most of us, "small" means something like a sand grain, perhaps a fraction of a millimeter (the smallest marks on a metric ruler or meterstick). "Huge" might bring to mind a mountain or a vast expanse of forest, maybe thousands of meters or many miles across (or tall). This somewhat provincial attitude will be strongly challenged by the objects and distances we encounter even in our own Solar System, not to mention the unimaginable vastness of the Universe as a whole. The entire Earth, as we'll see, is a tiny speck floating in a huge expanse; but even the smallest sand grain on the Earth is unfathomably gigantic when viewed from the perspective of its constituent atoms. The astonishing promise is that these fantastically different scales are all described by the same physical laws, which gives us some hope of understanding the Universe around us. The images above represent a series of increasing size scales typical of the objects we'll be studying. Atoms begin our scale on the small end. There are 92 naturally occurring types of atoms, the building blocks out of which all the "ordinary" matter in the Universe is made. Each type, or element, has atoms of a different size; a rough average size for an atom is about a ten-billionth of a meter. Lining up a billion atoms will just about span the width of a couple of apples.
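As a quick sanity check on that last claim (using the rough order-of-magnitude sizes from the text, not precise measured values):

```python
import math

atom_m = 1e-10        # rough average atomic size: a ten-billionth of a meter
billion = 10**9       # 1 followed by 9 zeroes

line_of_atoms_m = billion * atom_m   # a billion atoms laid end to end
# about 0.1 m: the width of a couple of apples, as claimed
assert math.isclose(line_of_atoms_m, 0.1)
```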
But wait, I've just invoked an enormous number that is quite beyond most of our imaginations already, so have I really told you anything at all? Just how big is a billion? It is a number of increasingly common usage, describing the economic costs of massive projects as well as the populations of the largest countries. It is written as a 1 followed by 9 zeroes: 1,000,000,000. As a shorthand, we also write it as \( 10^9 \), indicating that it is the product of nine factors of 10. If you don't have some way of imagining a billion things, though, it's difficult to make any sense out of the relative scales of thousands, millions, and billions. These days, such a sense is needed to evaluate national-level programs. Is a $10 billion project a lot more expensive than a $500 million one? (Yes! 20 times the cost.) If we can save $70 million from a proposal costing $7 billion, what percentage savings is that, really? (Only about 1%!) And so on.

A thousand little boxes

First, let's try to visualize a thousand things (1,000, or \(10^3\)): The bottom of this cube is a square made of 10 rows of 10 boxes each, so there are 100 boxes in the bottom layer. To make the cube, we stack 10 of these layers on top of each other, so you should convince yourself that there are a thousand boxes in the cube.

A million little boxes

Now let's step up to a million things (1,000,000, or \(10^6\)). You should see, especially if you look at the full-resolution image, that each little box has itself been subdivided into a thousand still smaller boxes. There are now a thousand groups of a thousand things, which is a million. Ok, now on to a billion! Continuing the pattern, if you can somehow imagine each of the tiny boxes above subdivided into a thousand still tinier boxes, then there will be a billion little boxes in the big cube. So a billion is a thousand groups of a million. One fairly easy way to try to visualize a billion is to consider an ordinary meterstick.
The smallest marks are millimeters, so there are a thousand of them along the stick. If you imagine a large box that is 1 meter square on the bottom and 1 meter high (a cubic meter), then there will be 1 billion millimeter-sized boxes contained inside. Since a sand grain is perhaps a millimeter across, a box of sand 3 feet high, 3 feet wide, and 3 feet deep would contain something like a billion grains. If you were to lay out all the millimeter boxes next to each other in a line, it would stretch a distance of 1000 kilometers, or about 600 miles (from Oklahoma City to Denver, roughly) --- if atoms were just barely visible, like grains of sand, then everyday objects like apples would be a few states across. Of course, this works for counting anything, so it's also interesting to try to imagine a million seconds, which is about 11 and a half days. A billion seconds, though, is almost 32 years; you are likely to live somewhere between 2 and 3 billion seconds. The Universe has been around for about 200 million human lifetimes, or tens of thousands of times longer than modern humans have existed. It'll be useful to remember that these expansive scales exist not only for space, but for time. We tend to think that a second is a short snippet of time, but there are processes that happen on fantastically short timescales. Ordinary yellow light, for example, is ultimately the result of something vibrating 600 trillion times each second (a trillion is a thousand billion)! There are very important reactions we'll discuss later that occur only for a duration of \(10^{-18}\) seconds (a billionth of a billionth of a second). For perspective, there have only been about \(10^{18}\) seconds since the Big Bang (half that, if you're being picky), so as many of these events could occur each second as there have been seconds since the beginning of the Universe!
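The arithmetic behind these comparisons is easy to verify directly; here is a short sketch (all figures are the round numbers quoted in the text):

```python
# Budget comparisons
assert 10e9 / 500e6 == 20            # a $10 billion project is 20x a $500 million one
savings_pct = 100 * 70e6 / 7e9       # $70 million saved out of $7 billion
assert round(savings_pct) == 1       # only about 1%

# The meterstick cube: 1000 mm per side -> a billion cubic millimeters
assert 1000**3 == 10**9

# Time scales
million_sec_days = 1e6 / 86_400                 # 86,400 seconds in a day
billion_sec_years = 1e9 / (365.25 * 86_400)
assert 11 < million_sec_days < 12               # about 11 and a half days
assert 31 < billion_sec_years < 32              # almost 32 years
```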
The relative size of the Sun and the Earth --- the Sun is more than 100 times larger across, and over a million Earths would fill up the Sun's volume.

Now let's look again at the sizes of objects in the first image. The first step, from an atom to the everyday scales represented by an apple, is an increase in scale by a factor of a billion. If we take the next step and line up a billion apples, then we're approaching the size of the Sun (see the image above!). Really, the Sun is about 15 billion apples across --- truly gigantic compared to any objects humans have been accustomed to dealing with throughout our history. It is remarkable that just in the last 100 years or so we have pretty well figured out both the physics of atomic interactions, a billion times smaller than us, and the inner workings of the Sun and other stars, ten billion times larger. It's a wonderful thing that the Universe seems to obey understandable laws on these wildly different scales. We routinely study objects on even larger scales, though. Some 200 billion stars, more or less like the Sun, are gathered in our Milky Way galaxy, a truly immense structure that is about six billion times larger than the distance from the Earth to the Sun! Roughly as an atom is to an apple, and as that apple is to the Sun, so is our solar system to our enveloping galaxy. It is fascinating that we can view the gigantic Milky Way every clear night from Earth; our perspective in trying to fully comprehend this object is in some ways similar to an atom's perspective of aggregate objects such as ourselves.

A slice through our Universe showing the distribution of galaxy clusters

But the Milky Way galaxy is not the largest structure in the Universe, although 100 years ago we were not aware of anything larger.
There are estimated to be around 150 billion galaxies in the observable Universe, which apparently spans the equivalent of about a million Milky Ways across and is arrayed in beautiful networks of galaxy clusters and superclusters, as shown here (each dot represents a galaxy cluster, with ours at the center of this map). Only now, for the first time in human history, are we able to map the large-scale structure of the Universe. Many previous cultures wondered and guessed at the nature of the Universe as a whole and what it might look like. We are the first privileged generation to finally address and answer these fundamental questions about our cosmic context and place in the Universe.

Labels: Science Ed, TAU Chapter 1 -- Sizes and Scales, TAU text, Teaching

A beautiful function... \[\int_{-\infty}^{+\infty} e^{-x^2}\,dx = \sqrt{\pi}\]
Second order splitting of a class of fourth order PDEs with point constraints
by Charles M. Elliott and Philip J. Herbert
We formulate a well-posedness and approximation theory for a class of generalised saddle point problems with a specific form of constraints. In this way we develop an approach to a class of fourth order elliptic partial differential equations with point constraints, using the idea of splitting into coupled second order equations. An approach is formulated using a penalty method to impose the constraints. Our main motivation is to treat certain fourth order equations involving the biharmonic operator and point Dirichlet constraints, for example arising in the modelling of biomembranes on curved and flat surfaces, but the approach may be applied more generally. The theory for well-posedness and approximation is presented in an abstract setting. Several examples are described together with some numerical experiments.
M. Alkämper, A. Dedner, R. Klöfkorn, and M. Nolte, The dune-alugrid module, Archive of Numerical Software, 4 (2016), pp. 1–28. Douglas N. Arnold, Richard S. Falk, and Ragnar Winther, Mixed finite element methods for linear elasticity with weakly imposed symmetry, Math. Comp. 76 (2007), no. 260, 1699–1723. MR 2336264, DOI 10.1090/S0025-5718-07-01998-9 M. Blatt, A. Burchardt, A. Dedner, C. Engwer, J. Fahlke, B. Flemisch, C. Gersbacher, C. Gräser, F. Gruber, C. Grüninger, D. Kempf, R. Klöfkorn, T. Malkmus, S. Müthing, M. Nolte, M. Piatkowski, and O. Sander, The distributed and unified numerics environment, version 2.4, Archive of Numerical Software, 4 (2016), pp. 13–29. Daniele Boffi, Franco Brezzi, and Michel Fortin, Mixed finite element methods and applications, Springer Series in Computational Mathematics, vol. 44, Springer, Heidelberg, 2013. MR 3097958, DOI 10.1007/978-3-642-36519-5 James H. Bramble, Joseph E. Pasciak, and Olaf Steinbach, On the stability of the $L^2$ projection in $H^1(\Omega )$, Math. Comp. 71 (2002), no.
237, 147–156. MR 1862992, DOI 10.1090/S0025-5718-01-01314-X G. Buttazzo and S. A. Nazarov, An optimization problem for the biharmonic equation with Sobolev conditions, J. Math. Sci. (N.Y.) 176 (2011), no. 6, 786–796. Problems in mathematical analysis. No. 58. MR 2838975, DOI 10.1007/s10958-011-0436-1 G. F. Carey and R. Krishnan, Penalty approximation of Stokes flow, Comput. Methods Appl. Mech. Engrg. 35 (1982), no. 2, 169–206. MR 682127, DOI 10.1016/0045-7825(82)90133-5 Eduardo Casas, $L^2$ estimates for the finite element method for the Dirichlet problem with singular data, Numer. Math. 47 (1985), no. 4, 627–632. MR 812624, DOI 10.1007/BF01389461 P. Ciarlet Jr., Jianguo Huang, and Jun Zou, Some observations on generalized saddle-point problems, SIAM J. Matrix Anal. Appl. 25 (2003), no. 1, 224–236. MR 2002909, DOI 10.1137/S0895479802410827 Gerhard Dziuk and Charles M. Elliott, Finite element methods for surface PDEs, Acta Numer. 22 (2013), 289–396. MR 3038698, DOI 10.1017/S0962492913000056 C. M. Elliott, D. A. French, and F. A. Milner, A second order splitting method for the Cahn-Hilliard equation, Numer. Math. 54 (1989), no. 5, 575–590. MR 978609, DOI 10.1007/BF01396363 Charles M. Elliott, Hans Fritz, and Graham Hobbs, Small deformations of Helfrich energy minimising surfaces with applications to biomembranes, Math. Models Methods Appl. Sci. 27 (2017), no. 8, 1547–1586. MR 3666332, DOI 10.1142/S0218202517500269 Charles M. Elliott, Hans Fritz, and Graham Hobbs, Second order splitting for a class of fourth order equations, Math. Comp. 88 (2019), no. 320, 2605–2634. MR 3985470, DOI 10.1090/mcom/3425 Charles M. Elliott, Carsten Gräser, Graham Hobbs, Ralf Kornhuber, and Maren-Wanda Wolf, A variational approach to particles in lipid membranes, Arch. Ration. Mech. Anal. 222 (2016), no. 2, 1011–1075. MR 3544322, DOI 10.1007/s00205-016-1016-9 C. M. Elliott, L. Hatcher, and P. J. 
Herbert, Small deformations of spherical biomembranes, Advanced Studies in Pure Mathematics series, vol. 85, "The role of metrics in the theory of partial differential equations," Mathematical Society of Japan, Tokyo (2020), pp. 39–61 (to appear). Alexandre Ern and Jean-Luc Guermond, Theory and practice of finite elements, Applied Mathematical Sciences, vol. 159, Springer-Verlag, New York, 2004. MR 2050138, DOI 10.1007/978-1-4757-4355-5 Vivette Girault and Pierre-Arnaud Raviart, Finite element methods for Navier-Stokes equations, Springer Series in Computational Mathematics, vol. 5, Springer-Verlag, Berlin, 1986. Theory and algorithms. MR 851383, DOI 10.1007/978-3-642-61623-5 Carsten Gräser and Tobias Kies, Discretization error estimates for penalty formulations of a linearized Canham–Helfrich-type energy, IMA J. Numer. Anal. 39 (2019), no. 2, 626–649. MR 3941880, DOI 10.1093/imanum/drx071 W. Helfrich, Elastic properties of lipid bilayers: Theory and possible experiments, Zeitschrift für Naturforschung C, 28 (1973), pp. 693–703. G. Hobbs, Particles and biomembranes: A variational PDE approach, Ph.D. thesis, The University of Warwick Mathematics Institute, 2016. R. Bruce Kellogg and Biyue Liu, A finite element method for the compressible Stokes equations, SIAM J. Numer. Anal. 33 (1996), no. 2, 780–788. MR 1388498, DOI 10.1137/0733039 Alfred H. Schatz, A weak discrete maximum principle and stability of the finite element method in $L_{\infty }$ on plane polygonal domains. I, Math. Comp. 34 (1980), no. 149, 77–91. MR 551291, DOI 10.1090/S0025-5718-1980-0551291-3 Charles M. Elliott Affiliation: Mathematics Institute, Zeeman Building, University of Warwick, Coventry, CV4 7AL, United Kingdom MR Author ID: 62960 Email: [email protected] Philip J.
Herbert Email: [email protected] Received by editor(s) in revised form: April 2, 2020 Published electronically: July 27, 2020 Additional Notes: The work of the first author was partially supported by the Royal Society via a Wolfson Research Merit Award. The research of the second author was funded by the Engineering and Physical Sciences Research Council grant EP/H023364/1 under the MASDOC centre for doctoral training at the University of Warwick. MSC (2010): Primary 65N30, 65J10, 35J35 DOI: https://doi.org/10.1090/mcom/3556
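As a standalone illustration of the idea in the abstract (the classical Ciarlet–Raviart-type splitting for a model biharmonic problem, not the paper's generalised saddle point formulation with point constraints): introducing the auxiliary variable \(w = -\Delta u\) turns the fourth order equation into a coupled pair of second order equations,

```latex
% Illustrative model problem only: the paper treats a more general
% constrained setting with a penalty method.
\Delta^2 u = f
\quad\Longleftrightarrow\quad
\begin{cases}
  -\Delta u = w, \\
  -\Delta w = f,
\end{cases}
```

so each equation can be discretised with standard \(H^1\)-conforming (e.g. piecewise linear) finite elements rather than \(H^2\)-conforming ones.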
Dov Gordon
[email protected]
Office hours, Spring 2019: Mon 11:30 - 12:30 and 3:00 - 4:00
I am primarily interested in the problem of computing on encrypted data. To maintain privacy and security, it is increasingly important that our information remain encrypted not just at rest, but at all times. We have to balance this need with the desire that our data remain useful. My research looks at how we can achieve both goals. I am interested both in the foundational aspects of this question (what is even possible?) and in how we can make such techniques practical in the real world.
CS330: Formal Methods and Models (Spring 2019)
CS795: Topics in Privacy, Anonymity and Fairness (Fall 2018)
CS600: Theory of Computation (Spring 2018)
ISA562: Information Security, Theory and Practice (Fall 2017)
CS795: Introduction to Cryptography (Fall 2016)
I joined George Mason University as an assistant professor in Fall 2015. From 2012 until 2015, I was a research scientist at Applied Communication Sciences (ACS), where I did research in cryptography and cyber security. Prior to that, I was a postdoc at Columbia University with Tal Malkin, as a recipient of the Computing Innovation Fellowship. I received my PhD in July 2010 with Jonathan Katz in the computer science department at the University of Maryland. Here's my curriculum vitae (PDF).
Differentially Private Access Patterns in Secure Computation. Sahar Mazloom and S. Dov Gordon. In submission.
We explore a new security model for secure computation on large datasets. We assume that two servers have been employed to compute on private data that was collected from many users, and, in order to improve the efficiency of their computation, we establish a new tradeoff with privacy.
Specifically, instead of claiming that the servers learn nothing about the input values, we claim that what they do learn from the computation preserves the differential privacy of the input. Leveraging this relaxation of the security model allows us to build a protocol that leaks some information in the form of access patterns to memory, while also providing a formal bound on what is learned from the leakage. We then demonstrate that this leakage is useful in a broad class of computations. We show that computations such as histograms, PageRank and matrix factorization, which can be performed in common graph-parallel frameworks such as MapReduce or Pregel, benefit from our relaxation. We implement a protocol for securely executing graph-parallel computations, and evaluate the performance on the three examples just mentioned above. We demonstrate marked improvement over prior implementations for these computations. Secure Computation of MIPS Machine Code. Xiao Wang, S. Dov Gordon, Allen McIntosh, and Jonathan Katz ESORICS 2016 Existing systems for secure computation require programmers to express the program to be securely computed as a circuit, or in some domain-specific language that can be compiled to a form suitable for applying known protocols. We propose a new system that can securely execute native MIPS code with no special annotations. Our system has the advantage of allowing programmers to use a language of their choice to express their programs, together with any off-the-shelf compiler to MIPS; it can be used for secure computation of existing "legacy" MIPS code as well. Our system uses oblivious RAM for fetching instructions and performing load/store operations in memory, and garbled universal circuits for the execution of a MIPS ALU in each instruction step. We also explore various optimizations based on an offline analysis of the MIPS code to be executed, in order to minimize the overhead of executing each instruction while still maintaining security. 
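To picture what executing "a MIPS ALU in each instruction step" involves, here is a toy fetch/execute/memory loop. This is purely illustrative: the instruction format and opcode names below are invented for the sketch, not taken from the paper's system, and the real protocol evaluates each step obliviously, using ORAM for instruction fetches and loads/stores and a garbled universal circuit for the ALU.

```python
# Toy register machine illustrating the per-instruction cycle that the
# secure-MIPS protocol evaluates obliviously (hypothetical encoding).

def step(state, program, memory):
    """Execute one instruction; in the secure protocol, one such step is
    evaluated inside a garbled universal circuit."""
    pc, regs = state
    op, a, b, c = program[pc]          # instruction fetch (ORAM in the real system)
    if op == "add":
        regs[a] = regs[b] + regs[c]
    elif op == "lw":                   # load word (ORAM access in the real system)
        regs[a] = memory[regs[b] + c]
    elif op == "sw":                   # store word (ORAM access in the real system)
        memory[regs[b] + c] = regs[a]
    elif op == "beq":                  # branch if equal
        if regs[a] == regs[b]:
            return (c, regs)
    return (pc + 1, regs)

def run(program, memory, steps):
    state = (0, [0] * 4)               # pc = 0, four zeroed registers
    for _ in range(steps):
        state = step(state, program, memory)
    return state, memory

# Sum memory[0] and memory[1] into memory[2] (register 0 stays 0 as a base)
prog = [("lw", 1, 0, 0), ("lw", 2, 0, 1), ("add", 3, 1, 2), ("sw", 3, 0, 2)]
_, mem = run(prog, [5, 7, 0], steps=4)
```

The point of the offline analysis mentioned in the abstract is that many of these per-step cases can be pruned when the code being executed makes them unreachable.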
Leakage-Resilient Public-Key Encryption from Obfuscation. Dana Dachman-Soled, S. Dov Gordon, Feng-Hao Liu, Adam O'Neill, and Hong-Sheng Zhou. PKC 2016.
The literature on leakage-resilient cryptography contains various leakage models that provide different levels of security. In this work, we consider the bounded leakage and the continual leakage models. In the bounded leakage model (Akavia et al. -- TCC 2009), it is assumed that there is a fixed upper bound L on the number of bits the attacker may leak on the secret key in the entire lifetime of the scheme. Alternatively, in the continual leakage model (Brakerski et al. -- FOCS 2010, Dodis et al. -- FOCS 2010), the lifetime of a cryptographic scheme is divided into "time periods" between which the scheme's secret key is updated. Furthermore, in its attack the adversary is allowed to obtain some bounded amount of leakage on the current secret key during each time period. In the continual leakage model, a challenging problem has been to provide security against leakage on key updates, that is, leakage that is a function not only of the current secret key but also of the randomness used to update it. We propose a new, modular approach to overcome this problem. Namely, we present a compiler that transforms any public-key encryption or signature scheme that achieves a slight strengthening of continual leakage resilience, which we call consecutive continual leakage resilience, to one that is continual leakage resilient with leakage on key updates, assuming indistinguishability obfuscation (Barak et al. -- CRYPTO 2001, Garg et al. -- FOCS 2013). Under the stronger assumption of public-coin differing-inputs obfuscation (Ishai et al. -- TCC 2015), the leakage rate tolerated by our compiled scheme is essentially as good as that of the starting scheme.
Our compiler is obtained by making a new connection between the problems of leakage on key updates and so-called "sender-deniable" encryption (Canetti et al. -- CRYPTO 1997), which was recently realized for the first time by Sahai and Waters (STOC 2014). In the bounded leakage model, we develop a new approach to constructing leakage-resilient encryption from obfuscation, based upon the public-key encryption scheme from iO and punctured pseudorandom functions due to Sahai and Waters (STOC 2014). In particular, we achieve leakage-resilient public-key encryption tolerating L bits of leakage for any L from iO and one-way functions. We build on this to achieve leakage-resilient public-key encryption with an optimal leakage rate of 1-o(1) based on public-coin differing-inputs obfuscation and collision-resistant hash functions. Such a leakage rate is not known to be achievable in a generic way based on public-key encryption alone. We then develop entirely new techniques to construct a new public-key encryption scheme that is secure under (consecutive) continual leakage resilience (under appropriate assumptions), which we believe is of independent interest.
Constant-Round MPC with Fairness and Guarantee of Output Delivery. S. Dov Gordon, Feng-Hao Liu, and Elaine Shi. Crypto 2015.
We study the round complexity of multiparty computation with fairness and guaranteed output delivery, assuming the existence of an honest majority. We demonstrate a new lower bound and a matching upper bound. Our lower bound rules out any two-round fair protocols in the standalone model, even when the parties are given access to a common reference string (CRS). The lower bound follows by a reduction to the impossibility result of virtual black-box obfuscation of arbitrary circuits. Then we demonstrate a three-round protocol with guarantee of output delivery, which in general is harder than achieving fairness (since the latter allows the adversary to force a fair abort).
We develop a new construction of a threshold fully homomorphic encryption scheme, with a new property that we call "flexible" ciphertexts. Roughly, our threshold encryption scheme allows parties to adapt flexible ciphertexts to the public keys of the non-aborting parties, which provides a way of handling aborts without adding any communication.
Multi-Input Functional Encryption. S. Dov Gordon, Jonathan Katz, Feng-Hao Liu, Elaine Shi and Hong-Sheng Zhou. Eurocrypt 2014.
Functional encryption (FE) is a powerful primitive enabling fine-grained access to encrypted data. In an FE scheme, secret keys ("tokens") correspond to functions; a user in possession of a ciphertext ct = Enc(x) and a token TK_f for the function f can compute f(x) but learn nothing else about x. An active area of research over the past few years has focused on the development of ever more expressive FE schemes. In this work we introduce the notion of multi-input functional encryption. Here, informally, a user in possession of a token TK_f for an n-ary function f and multiple ciphertexts ct_1 = Enc(x_1), ..., ct_n = Enc(x_n) can compute f(x_1, ..., x_n) but nothing else about the x_i. Besides introducing the notion, we explore the feasibility of multi-input FE in the public-key and symmetric-key settings, with respect to both indistinguishability-based and simulation-based definitions of security.
Multi-Client Verifiable Computation with Stronger Security Guarantees. S. Dov Gordon, Jonathan Katz, Feng-Hao Liu, Elaine Shi and Hong-Sheng Zhou. TCC 2015.
At TCC 2013, Choi et al. introduced the notion of multi-client verifiable computation, in which a set of clients outsource to an untrusted server the computation of a function f over their collective inputs in a sequence of time periods. In that work, the authors defined and realized multi-client verifiable computation satisfying soundness against a malicious server and privacy against the semi-honest corruption of a single client.
We explore the possibility of achieving stronger security guarantees in this setting, in several respects. We begin by introducing a simulation-based notion of security in the universal composability framework, which provides a clean way of defining soundness and privacy in a single definition. We show the notion is impossible to achieve, even in the semi-honest case, if client-server collusion is allowed. Faced with this result, we explore several meaningful relaxations and give constructions realizing them.
On the Relationship between Functional Encryption, Obfuscation, and Fully Homomorphic Encryption. Joël Alwen, Manuel Barbosa, Pooya Farshim, Rosario Gennaro, S. Dov Gordon, Stefano Tessaro, and David A. Wilson. IMA Conference on Cryptography and Coding 2013.
We investigate the relationship between Functional Encryption (FE) and Fully Homomorphic Encryption (FHE), demonstrating that, under certain assumptions, a Functional Encryption scheme supporting evaluation on two ciphertexts implies Fully Homomorphic Encryption. We first introduce the notion of Randomized Functional Encryption (RFE), a generalization of Functional Encryption dealing with randomized functionalities that is of interest in its own right, and show how to construct an RFE from a (standard) semantically secure FE. For this we define the notion of entropically secure FE and use it as an intermediary step in the construction. Finally we show that RFEs constructed in this way can be used to construct FHE schemes, thereby establishing a relation between the FHE and FE primitives. We conclude the paper by recasting the construction of RFE schemes in the context of obfuscation.
Multi-party Computation of Polynomials and Branching Programs without Simultaneous Interaction. S.
Dov Gordon, Tal Malkin, Mike Rosulek and Hoeteck Wee. Eurocrypt 2013.
Halevi, Lindell, and Pinkas (CRYPTO 2011) recently proposed a model for secure computation that captures communication patterns that arise in many practical settings, such as secure computation on the web. In their model, each party interacts only once, with a single centralized server. Parties do not interact with each other; in fact, the parties need not even be online simultaneously. In this work we present a suite of new, simple and efficient protocols for secure computation in this "one-pass" model. We give protocols that obtain optimal privacy for the following general tasks: -- Evaluating any multivariate polynomial $F(x_1, \ldots, x_n)$ (modulo a large RSA modulus N), where the parties each hold an input $x_i$. -- Evaluating any read-once branching program over the parties' inputs. As a special case, these function classes include all previous functions for which an optimally private, one-pass computation was known, as well as many new functions, including variance and other statistical functions, string matching, second-price auctions, classification algorithms and some classes of finite automata and decision trees.
Secure Two-Party Computation in Sublinear (Amortized) Time. Dov Gordon, Jonathan Katz, Vladimir Kolesnikov, Fernando Krell, Tal Malkin, Mariana Raykova, Yevgeniy Vahlis. CCS 2012. (Note that this proceedings version is considerably different from the ePrint version.)
Traditional approaches to generic secure computation begin by representing the function f being computed as a circuit. If f depends on each of its input bits, this implies a protocol with complexity at least linear in the input size. In fact, linear running time is inherent for non-trivial functions, since each party must "touch" every bit of their input lest information about the other party's input be leaked.
This seems to rule out many applications of secure computation (e.g., database search) in scenarios where inputs are huge. Adapting and extending an idea of Ostrovsky and Shoup, we present an approach to secure two-party computation that yields protocols running in sublinear time, in an amortized sense, for functions that can be computed in sublinear time on a random-access machine (RAM). Moreover, each party is required to maintain state that is only (essentially) linear in its own input size. Our protocol applies generic secure two-party computation on top of oblivious RAM (ORAM). We present an optimized version of our protocol using Yao's garbled-circuit approach and a recent ORAM construction of Shi et al. We describe an implementation of this protocol, and evaluate its performance for the task of obliviously searching a database with over 1 million entries. Because of the cost of our basic steps, our solution is slower than Yao on small inputs. However, our implementation outperforms Yao already on DB sizes of 2^18 entries (a quite small DB by today's standards). A Group Signature Scheme From Lattice Assumptions Dov Gordon, Jonathan Katz, and Vinod Vaikuntanathan Asiacrypt 2010 Group signature schemes allow users to sign messages on behalf of a group while (1) maintaining anonymity (within that group) with respect to an observer, yet (2) ensuring traceability of a signer (by the group manager) when needed. In this work we give the first construction of a group signature scheme based on lattices (more precisely, the learning with errors assumption), in the random oracle model. Toward our goal, we construct a new algorithm for sampling a random superlattice of a given modular lattice together with a short basis, that may be of independent interest. 
Partial Fairness in Secure Two-Party Computation Dov Gordon and Jonathan Katz Eurocrypt 2010 A seminal result of Cleve (STOC '86) is that, in general, \emph{complete} fairness is impossible to achieve in two-party computation. In light of this, various techniques for obtaining \emph{partial} fairness have been suggested in the literature. We propose a definition of partial fairness within the standard real-/ideal-world paradigm that addresses deficiencies of prior definitions. We also show broad feasibility results with respect to our definition: partial fairness is possible for any (randomized) functionality $f:X \times Y \rightarrow Z_1 \times Z_2$ at least one of whose domains or ranges is polynomial in size. Our protocols are always private, and when one of the domains has polynomial size our protocols also simultaneously achieve the usual notion of security with abort. In contrast to some prior work, we rely on standard assumptions only. We also show that, as far as general feasibility is concerned, our results are \emph{optimal} (with respect to our definition). Specifically, there exist functions with super-polynomial domain and range for which it is impossible to achieve our definition. On Complete Primitives for Fairness Dov Gordon, Yuval Ishai, Tal Moran, Rafail Ostrovsky and Amit Sahai TCC 2010 For secure two-party and multi-party computation with abort, classification of which primitives are {\em complete} has been extensively studied in the literature. However, for \emph{fair} secure computation, where (roughly speaking) either all parties learn the output or none do, the question of complete primitives has remained largely unstudied. In this work, we initiate a rigorous study of completeness for primitives that allow fair computation. 
We show the following results: - \textbf{No ``short'' primitive is complete for fairness.} In surprising contrast to other notions of security for secure two-party computation, we show that for fair secure two-party computation, no primitive of size $O(\log k)$ is complete, where $k$ is a security parameter. This is the case even if we can enforce parallelism in calls to the primitives (i.e., the adversary does not get output from any primitive in a parallel call until it sends input to all of them). This negative result holds regardless of any computational assumptions. - \textbf{Coin Flipping and Simultaneous Broadcast are not complete for fairness.} The above result rules out the completeness of two natural candidates: coin flipping (for any number of coins) and simultaneous broadcast (for messages of arbitrary length). - \textbf{Positive results.} To complement the negative results, we exhibit a $k$-bit primitive that \emph{is} complete for two-party fair secure computation. This primitive implements a ``fair reconstruction'' procedure for a secret sharing scheme with some robustness properties. We show how to generalize this result to the multi-party setting. - \textbf{Fairness combiners.} We also introduce the question of constructing a protocol for fair secure computation from primitives that may be faulty. We show a simple functionality that is complete for two-party fair computation when the majority of its instances are honest. On the flip side, we show that this result is tight: no functionality is complete for fairness if half (or more) of the instances can be malicious. On the Round Complexity of Zero-Knowledge Proofs Based on One-Way Permutations Dov Gordon, Hoeteck Wee, David Xiao, and Arkady Yerukhimovich Latincrypt 2010 We consider the following problem: can we construct constant-round zero-knowledge proofs (with negligible soundness) for $\NP$ assuming only the existence of one-way permutations? 
We answer the question in the negative for fully black-box constructions (using only black-box access to both the underlying primitive and the cheating verifier) that satisfy a natural restriction on the ``adaptivity'' of the simulator's queries. Specifically, we show that only languages in $\coAM$ have constant-round zero-knowledge proofs of this kind. Authenticated Broadcast with a Partially Compromised Public-Key Infrastructure Dov Gordon, Jonathan Katz, Ranjit Kumaresan and Arkady Yerukhimovich Symposium on Stabilization, Safety and Security of Distributed Systems, 2010 Given a public-key infrastructure (PKI) and digital signatures, it is possible to construct broadcast protocols tolerating any number of corrupted parties. Almost all existing protocols, however, do not distinguish between \emph{corrupted} parties (who do not follow the protocol), and \emph{honest} parties whose secret (signing) keys have been compromised (but who continue to behave honestly). We explore conditions under which it is possible to construct broadcast protocols that still provide the usual guarantees (i.e., validity/agreement) to the latter. Consider a network of $n$ parties, where an adversary has compromised the secret keys of up to $t_c$ honest parties and, in addition, fully controls the behavior of up to $t_a$ other parties. We show that for any fixed $t_c > 0$, and any fixed $t_a$, there exists an efficient protocol for broadcast if and only if $2t_a + \min(t_a, t_c) < n$. (When $t_c = 0$, standard results imply feasibility.) We also show that if $t_c, t_a$ are not fixed, but are only guaranteed to satisfy the bound above, then broadcast is impossible to achieve except for a few specific values of $n$; for these ``exceptional'' values of $n$, we demonstrate a broadcast protocol. Taken together, our results give a complete characterization of this problem. Invited for a special issue in Elsevier's Information and Computation journal. 
Complete Fairness in Multi-Party Computation without an Honest Majority Dov Gordon and Jonathan Katz Theory of Cryptography Conference, 2009 Gordon et al. recently showed that certain (non-trivial) functions can be computed with complete fairness in the \emph{two-party} setting. Motivated by their results, we initiate a study of complete fairness in the \emph{multi-party} case and demonstrate the first completely-fair protocols for non-trivial functions in this setting. We also provide evidence that achieving fairness is "harder" in the multi-party setting, at least with regard to round complexity. Complete Fairness in Secure Two-Party Computation Dov Gordon, Carmit Hazay, Jonathan Katz and Yehuda Lindell ACM Symposium on Theory of Computing (STOC) 2008 In the setting of secure two-party computation, two mutually distrusting parties wish to compute some function of their inputs while preserving, to the extent possible, various security properties such as privacy, correctness, and more. One desirable property is \emph{fairness}, which guarantees that if either party receives its output, then the other party does too. Cleve (STOC 1986) showed that complete fairness cannot be achieved \emph{in general} in the two-party setting; specifically, he showed (essentially) that it is impossible to compute Boolean XOR with complete fairness. Since his work, the accepted folklore has been that \emph{nothing} non-trivial can be computed with complete fairness, and the question of complete fairness in secure two-party computation has been treated as closed since the late '80s. In this paper, we demonstrate that this widely held folklore belief is \emph{false} by showing completely-fair secure protocols for various non-trivial two-party functions including Boolean AND/OR as well as Yao's ``millionaires' problem''. 
Surprisingly, we show that it is even possible to construct completely-fair protocols for certain functions containing an ``embedded XOR'', although in this case we also prove a lower bound showing that a super-logarithmic number of rounds are necessary. Our results demonstrate that the question of completely-fair secure computation without an honest majority is far from closed. Rational Secret Sharing, Revisited Dov Gordon and Jonathan Katz Security and Cryptography for Networks 2006 We consider the problem of secret sharing among $n$ rational players. This problem was introduced by Halpern and Teague (STOC 2004), who claim that a solution is \emph{impossible} for $n=2$ but show a solution for the case $n\geq 3$. Contrary to their claim, we show a protocol for rational secret sharing among $n=2$ players; our protocol extends to the case $n\geq 3$, where it is simpler than the Halpern-Teague solution and also offers a number of other advantages. We also show how to avoid the continual involvement of the dealer, in either our own protocol or that of Halpern and Teague. Our techniques extend to the case of rational players trying to securely compute an arbitrary function, under certain assumptions on the utilities of the players.
Artificial Intelligence Stack Exchange is a question and answer site for people interested in conceptual questions about life and challenges in a world where "cognitive" functions can be mimicked in purely digital environments.

Mathematical calculation behind decision tree classifier with continuous variables

I am working on a binary classification problem having continuous variables (gene expression values). My goal is to classify the samples as case or control using gene expression values (from Gene-A, Gene-B and Gene-C) with a decision tree classifier. I am using the entropy criterion for node splitting and am implementing the algorithm in Python. The classifier is easily able to differentiate the samples. Below is the sample data.

Sample training set with labels:

Gene-A  Gene-B  Gene-C  Label
1       0       38      Case
0       7       374     Case
0       2       538     Control
33      5       860     Control

Sample testing set with labels:

Gene-A  Gene-B  Gene-C  Label
1       6       394     Case
13      4       777     Control

I have gone through a lot of resources and have learned how to mathematically calculate Gini impurity, entropy and information gain. I am not able to comprehend how the actual training and testing work. It would be really helpful if someone could show the calculation for training and testing with my sample datasets or provide an online resource? machine-learning math decision-trees (asked by spriyansh29)

Of course, it depends on what algorithm you use. Typically, a top-down algorithm is used. You gather all the training data at the root. The base decision is going to be whatever class you have most of. Now, we see if we can do better. We consider all possible splits. For categorical variables, every value gets its own node. For continuous variables, we can use any possible midpoint between two values (if the values were sorted). For your example, possible splits are Gene-A < 0.5, Gene-A < 17, Gene-B < 1, Gene-B < 3.5, and so on. There is a total of 10 possible splits. 
For each of those candidate splits, we measure how much the entropy decreases (or whatever criterion we selected) and, if this decrease looks significant enough, we introduce this split. For example: our entropy in the root node is $-0.4 \log_2 0.4 - 0.6 \log_2 0.6 \approx 0.97$. If we introduce the split Gene-A < 0.5, we get one leaf with entropy $1$ (with 2 data points in it), and one leaf with entropy $0.918$ (with 3 data points). The total decrease of entropy is $0.97 - (\frac25 \times 1 + \frac35 \times 0.918) \approx 0.02$. For the split Gene-A < 17 we get a decrease of entropy of about $0.3219$. The best splits for the root are Gene-B < 5.5 and Gene-C < 456. These both reduce the entropy by about $0.42$, which is a substantial improvement. When you choose a split, you introduce a leaf for the possible outcomes of the test. Here it's just 2 leaves: "yes, the value is smaller than the threshold" or "no, it is not smaller". In every leaf, we collect the training data from the parent that corresponds to this choice. So, if we select Gene-B < 5.5 as our split, the "yes" leaf will contain the first, fourth and fifth data points, and the "no" leaf will contain the other data points. Then we continue, by repeating the process for each of the leaves. In our example, the "yes" branch can still be split further. A good split would be Gene-C < 288, which results in pure leaves (they have 0 entropy). When a leaf is "pure enough" (it has very low entropy), or we don't think we have enough data, or the best split for a leaf is not a significant improvement, or we have reached a maximum depth, you stop the process for that leaf. In this leaf you can store the count for all the classes you have in the training data. If you have to make a prediction for a new data point (from the test set), you start at the root and look at the test (the splitting criterion). For example, for the first test point, we have that Gene-B < 5.5 is false, so we go to the 'no' branch. 
You continue until you get to a leaf. In a leaf, you would predict whatever class you have most of. If the user wants, you can also output a probability by giving the proportion. For the first test point, we go to the "no" branch of the first test, and we end up in a leaf; our prediction would be "Case". For the second test point, we go to the "yes" branch of the first test. Here we test whether 777 < 288, which is false, so we go to the "no" branch, and end up in a leaf. This leaf contains only "Control" cases, so our prediction would be "Control". (answered by Robby Goetschalckx)
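The entropy arithmetic in the answer above can be reproduced with a few lines of Python (a minimal sketch; the helper names `entropy` and `info_gain` are mine, not from any library). The child class counts `[1, 1]` and `[1, 2]` match the leaf sizes and entropies quoted for the Gene-A < 0.5 split:

```python
import math

def entropy(counts):
    """Shannon entropy (base 2) of a leaf with the given class counts."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c > 0)

def info_gain(parent, children):
    """Decrease in entropy when the parent's data is split into child leaves."""
    n = sum(parent)
    weighted = sum(sum(ch) / n * entropy(ch) for ch in children)
    return entropy(parent) - weighted

# Root node with class counts 2 and 3 (five training points):
print(round(entropy([2, 3]), 3))                      # 0.971
# Split Gene-A < 0.5: child leaves with class counts [1, 1] and [1, 2]:
print(round(info_gain([2, 3], [[1, 1], [1, 2]]), 2))  # 0.02
```

The same `info_gain` call, evaluated for every candidate threshold, is how the best split for each node is chosen.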
Why is color conserved in QCD? According to Noether's theorem, global invariance under $SU(N)$ leads to $N^2-1$ conserved charges. But in QCD gluons are not conserved; color is. There are N colors, not $N^2-1$ colors. Am I misunderstanding Noether's theorem? My only guess (which is not made clear anywhere I can find) is that there are $N_R^2-1$ conserved charges, where $N_R$ is the dimension of the representation of SU(N) that the matter field transforms under. I think I can answer my own question by saying that eight color combinations are conserved which do correspond to the colors carried by gluons. Gluon number is obviously not conserved, but the color currents of each gluon type are conserved. An arbitrary number of gluons can be created from the vacuum without violating color conservation because color pair production {$r,\bar{r}$}, {$g,\bar{g}$}, {$b,\bar{b}$} does not affect the overall color flow. Lubos or anyone please correct me if this is wrong, or if you want to clean it up and incorporate it into your answer. Lubos, I will accept your answer. 
This post imported from StackExchange Physics at 2015-04-11 10:31 (UTC), posted by SE-user user1247. Tags: gauge-theory quantum-chromodynamics noethers-theorem colour-charge. Asked Mar 14, 2013 in Theoretical Physics by user1247; retagged Apr 11, 2015. Related question by OP: physics.stackexchange.com/q/56866/2451 (commented Mar 14, 2013 by Qmechanic) @DJBunk, drawing diagrams to convince myself, it would seem to me that it is the three primary colors that are conserved, hence my confusion. (commented Mar 14, 2013 by user1247) Excellent question! It's definitely color charge that is conserved (gluon number is not). But I don't know what the resolution of this issue is. (commented Mar 14, 2013 by David Z) Dear @David, I am a sort of a fan of yours but this basic confusion of yours came as a big surprise to me. (commented Mar 14, 2013 by Luboš Motl) @Luboš what confusion are you talking about? If you mean the fact that I don't know how to answer the question, it hardly seems like the sort of thing a grad student should be expected to know off the top of their head (except for one who specializes in group theory). @lubos, it would be extremely helpful if you would compare what I say in my edit to what wikipedia says (en.wikipedia.org/wiki/Gluon#Eight_gluon_colors), where they specifically associate the 8 gluon color combinations with the gell-mann matrices. This is common in other texts. Either I'm right, or everybody else is stupid. 
It would be great if, instead of continuing to call me stupid, you would actually try to address this discrepancy, and in so doing, my question. This seems to be a rather elementary confusion, it's like asking: since Lorentz transformations are 4 by 4 matrices and act on column vectors with 4 independent entries, how can there be more than 4 conserved quantities originating from Lorentz invariance? (commented Apr 11, 2015 by Jia Yiyang) Global invariance under $SU(N)$ is equivalent to the conservation of $N^2-1$ charges – these charges are nothing else than the generators of the Lie algebra ${\mathfrak su}(N)$ that mix some components of $SU(N)$ multiplets with other components of the same multiplets. These charges don't commute with each other in general. Instead, their commutators are given by the defining relations of the Lie algebra, $$ [\tau_i,\tau_j] = f_{ij}{}^k \tau_k $$ But these generators $\tau_i$ are symmetries because they commute with the Hamiltonian, $$[\tau_i,H]=0.$$ None of these charges may be interpreted as the "gluon number". This identification is completely unsubstantiated not only in QCD but even in the simpler case of QED. What is conserved in electrodynamics because of the $U(1)$ symmetry is surely not the number of photons! It's the electric charge $Q$ which is something completely different. In particular, photons don't carry any electric charge. Similarly, this single charge $Q$ – generator of $U(1)$ – is replaced by $N^2-1$ charges $\tau_i$, the generators of the algebra ${\mathfrak su}(N)$, in the case of the $SU(N)$ group. Also, it's misleading – but somewhat less misleading – to suggest that the conserved charges in the globally $SU(N)$ invariant theories are just the $N$ color charges. What is conserved – what commutes with the Hamiltonian – is the whole multiplet of $N^2-1$ charges, the generators of ${\mathfrak su}(N)$. 
Non-abelian algebras may be a bit counterintuitive and the hidden motivation behind the OP's misleading claim may be an attempt to represent $SU(N)$ as a $U(1)^k$ because you may want the charges to be commuting – and therefore to admit simultaneous eigenstates (the values of the charges are well-defined at the same moment). But $SU(N)$ isn't isomorphic to any $U(1)^k$; the former is a non-Abelian group, the latter is an Abelian group. At most, you may embed a $U(1)^k$ group into $SU(N)$. There's no canonically preferred way to do so but all the choices are equivalent up to conjugation. But the largest commuting group one may embed into $SU(N)$ isn't $U(1)^N$. Instead, it is $U(1)^{N-1}$. The subtraction of one arises because of $S$ (special, determinant equals one), a condition restricting a larger group $U(N)$ whose Cartan subalgebra would indeed be $U(1)^N$. For example, in the case of $SU(3)$ of real-world QCD, the maximal commuting (Cartan) subalgebra of the group is $U(1)^2$. It describes a two-dimensional space of "colors" that can't be visualized on a black-and-white TV, to use the analogy with the red-green-blue colors of human vision. Imagine a plane with hexagons and triangles with red-green-blue and cyan-purple-yellow on the vertices. But grey, i.e. color-neutral, objects don't carry any charges under the Cartan subalgebra of $SU(N)$. For example, the neutron is composed of one red, one green, one blue valence quark. So you could say that it has charges $(+1,+1,+1)$ under the "three colors". But that would be totally invalid. A neutron (much like a proton) actually carries no conserved QCD "color" charges. It is neutral under the Cartan subalgebra $U(1)^2$ of $SU(3)$ because the colors of the three quarks are contracted with the antisymmetric tensor $\epsilon_{abc}$ to produce a singlet. In fact, it is invariant under all eight generators of $SU(3)$. It has to be so. All particles that are allowed to appear in isolation must be color singlets – i.e. 
carry vanishing values of all conserved charges in $SU(3)$ – because of confinement! So as far as the $SU(3)$ charges go, nothing prevents a neutron from decaying to completely neutral final products such as photons. It's only the (half-integral) spin $J$ and the (highly approximately) conserved baryon number $B$ that only allow the neutron to decay into a proton, an electron, and an antineutrino and that make the proton stable (so far) although the proton's decay to completely quark-free final products such as $e^+\gamma$ is almost certainly possible even if very rare. (answered Mar 14, 2013 by Luboš Motl) So it is just a coincidence that there are $N^2-1$ conserved charges and $N^2-1$ gauge fields? What are the conserved charges called? Can you point to a color combination or any specific example of one of the eight conserved charges? For example, are the three SU(2) weak charges the three components of weak isospin? "For example, baryons would have to contain 4 quarks to be color-neutral." -- wait, are you saying that in SU(2) weak theory particles must be "color neutral", where now there are two colors? What I mean about the "coincidence" is that the generators of SU(N) are both the conserved charges, and also correspond to the gauge fields, apparently. But the gauge fields are not conserved, the charges are. How can they both correspond to the generators, but only one is conserved? For SU(2) the generators are weak isospin T1,T2,T3. Shouldn't all three be conserved? Is only T3 conserved because of electroweak symmetry breaking? @user1247 QCD is in a strong coupling confining phase, while weak theory is in a weak coupling Higgs phase, so the low energy phenomenology of the two theories is completely different. If you wrote down a confining SU(2) analogy to QCD (not the physical electroweak theory) then you would have colour neutral combinations of two colour quarks. 
(commented Mar 15, 2013 by Michael Brown) Lubos, you say I "call the generators by labels like {red,greenbar} etc. That's silly and omitting some key information". But I am doing what everybody does. Is wikipedia (en.wikipedia.org/wiki/Gluon#Eight_gluon_colors) wrong then? They specifically associate the 8 gluon color combinations with the gell-mann matrices. Look, I'm trying to clarify my understanding of a heuristic here, one that is used perhaps sloppily by countless physicists, talking vaguely about "color conservation." Maybe they're all idiots, or maybe it could be a useful heuristic if used properly? @LubošMotl, @Manishearth, currently Lubos seems to call into question my motives rather than address my doubts. I am an experimental physicist genuinely trying to understand a heuristic that is described in almost every modern QFT textbook, and one described in wikipedia here (en.wikipedia.org/wiki/Gluon#Eight_gluon_colors). Instead of addressing this and fleshing it out, he prefers to tell me I'm trying to "invent some new misleading layman's caricature". (commented Mar 17, 2013 by user1247)
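The algebraic facts in the accepted answer are easy to check numerically: the conserved charges are the $N^2-1=8$ generators, only two of which (the diagonal Cartan pair $\lambda_3$, $\lambda_8$) commute and so admit simultaneous eigenvalues. A sketch with a few of the standard Gell-Mann matrices, in the usual normalization $[\lambda_a,\lambda_b]=2if_{abc}\lambda_c$ (numpy assumed):

```python
import numpy as np

# Four of the eight Gell-Mann matrices of su(3).
# l3 and l8 span the Cartan (maximal commuting) subalgebra.
l1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
l2 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]], dtype=complex)
l3 = np.diag([1, -1, 0]).astype(complex)
l8 = np.diag([1, 1, -2]).astype(complex) / np.sqrt(3)

def comm(a, b):
    return a @ b - b @ a

print(np.allclose(comm(l3, l8), 0))        # True: the two Cartan charges commute
print(np.allclose(comm(l1, l2), 2j * l3))  # True: f_123 = 1, the algebra is non-abelian
# All generators are traceless, reflecting the "S" (det = 1) in SU(3):
print(np.allclose([np.trace(m) for m in (l1, l2, l3, l8)], 0))  # True
```

The first check is the two-dimensional "color plane" of the answer; the second is the non-abelian structure that prevents the eight charges from having a joint eigenbasis.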
Interesting insights on math. Financial derivatives, payoff functions and portfolios: motivation

Key ideas in this post:
- Payoff functions are central; derivatives are just ways to achieve specified payoff functions.
- Your entire portfolio is also a derivative.
- We are interested in payoff functions that maximise certain combinations of expectation, risk, and other moments (depending on the investor's preferences).
- Shorting is just "investing in the rest of the market" and is the natural way to get a payoff function of $-x$.

When I first saw the definitions of several financial assets, I found them completely arbitrary -- it's not that I didn't get the reason one would have them, but rather that I saw no way to immediately understand them or a starting point for reasoning about them mathematically. Other than what was perhaps the most basic asset -- stocks (and also bonds, physical assets, etc.) and their baskets -- all the derivatives (and things that aren't called derivatives) based on them seemed really artificial in their construction. But this isn't exactly unfamiliar territory, is it? You've seen unmotivated definitions in mathematics, and you've seen that you need to put in quite a bit of effort to really motivate them and understand why they make perfect sense -- you've seen that, e.g. here. So let's do the same thing with finance. Let's start with a simple one: shorting. There is a certain asymmetry in the definitions of longing and shorting, isn't there? It's the "borrowing a stock" part of the definition of shorting that introduces this asymmetry. But if you've spent any time thinking about economics, the idea of borrowing something you don't have should be familiar -- it's what you do when you don't have any investment capital to start with, but you think you can grow the value of what you've borrowed by e.g. investing it in a stock. 
Let's phrase this in a slightly different (and by "slightly different", I mean "take the buying-selling dual of") way: How to invest in a stock without money at hand: Borrow some money, immediately "sell" the money for some stocks -- after some time has passed, "buy" back the money by returning the stocks. If the value of the stocks has increased, you'll get more money in return and be able to repay the loan. This is precisely symmetric to the situation of shorting -- longing an asset just means shorting money -- or more precisely, shorting the rest of the market. The apparent asymmetry between longing and shorting comes back from the fact that you are much more likely to already own some of "the rest of the market" than to own a particular stock -- for example, the unbounded losses of shorting arise from the fact that it's much easier for a single stock's value to skyrocket than for money's -- so in longing, there may still be ways for you to earn the money to repay it even if the value of the stock drops, i.e. the value of your other assets (e.g. your labour or property) relative to money would not have dropped. One advantage of this approach is that it is conceptually interesting -- and will hopefully allow us to transfer insights and ideas between stocks and shorts (except when certain approximations may be involved) -- another is that it immediately nullifies "moral" criticism of shorting, from e.g. Elon Musk, as it is really just the same as investing in the "rest of the market". Wait a minute -- but what if you actually just invested in "the rest of the market"? That would clearly have a much lower return than shorting the stock directly, right? Except you're thinking about investing in the rest of the market by paying money, not by paying the stock you're betting against -- that's a bet for the rest of the market against money, not against said stock. Well, shorting was an example where we wanted to bet that the price of an asset goes down. 
But in general, we may have any sort of weird prediction on the price of an asset -- maybe that it will "fluctuate a lot", or that it "won't exceed a certain level", or that it "will go up but only to a point", or that it "will reach a certain range". You may have any sort of elaborate probability distribution $\rho(x)$ on the value $x$ of the asset after a period of time. Given such a distribution, what you'd want to do (ignoring risk) is to maximise your expected return (minus the cost of buying the contract, of course): $$\chi=\int \rho(x)f(x)\,dx$$ Where $f(x)$ is the payoff you get if the asset reaches the price $x$ -- this is called the payoff function. Well, why not just take $f(x)$ to be arbitrarily high? Because the contract will be really expensive, of course. How expensive? Predicting that would require:

- not only the $\rho$ distribution on this asset as believed by each seller and buyer in the market,
- but also the amount of capital they have,
- and their beliefs about the future behavior of other assets in the market, contracts on which they could buy instead.

And that is still not to mention the fact that people do not maximise the expected value of profit per se, but have varying levels of risk aversion. But that's alright -- we don't need to predict that. That price is crunched for us by the market -- it is the market price of the contract. What's more important is to estimate $\chi = E_{\rho}[f(x)]$. Well, in fact, if we're concerned with risk, then we'd also be interested in the variance of the distribution -- and in general, an individual may also have a skewness or kurtosis preference (an example of a kurtosis preference would be among gamblers, who want heavy tails for the "big win"). In fact, $\chi$ can depend on multiple underlying assets: $$\chi=E_\rho[f(\mathbf{x})]$$ Where $\mathbf{x}$ is the vector of prices of each underlying asset. 
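Given a concrete belief distribution $\rho$, the expectation $\chi = E_\rho[f(x)]$ can be estimated by plain Monte Carlo. This is an illustrative sketch only: the lognormal belief, the parameters, and the function names are made-up placeholders, not anything from a real market.

```python
import math
import random

def expected_payoff(payoff, sample_price, n=100_000, seed=0):
    """Monte Carlo estimate of chi = E_rho[f(x)]: average the payoff
    over draws from the believed terminal-price distribution rho."""
    rng = random.Random(seed)
    return sum(payoff(sample_price(rng)) for _ in range(n)) / n

# Made-up belief: terminal price is lognormal around 100
def sample(rng):
    return 100.0 * math.exp(rng.gauss(0.0, 0.2))

# Payoff of holding the asset outright, f(x) = x:
chi = expected_payoff(lambda x: x, sample)  # close to 100*exp(0.02), about 102
```

Swapping in a different `payoff` lambda gives the expected return of any contract under the same belief, which is exactly the comparison the text describes.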
In fact, this multivariate $f$ can represent your entire portfolio of derivatives on assets. If $f(\mathbf{x})$ can be written as a sum of functions of each component, this can be considered as some number of separate univariate derivatives -- the reason such a portfolio is still useful is risk management, especially if we use a $\rho$ that has some correlations (even otherwise, one may use a portfolio to mitigate risk, but correlations allow us to target specific risks). There is an alternative definition of the payoff function, where it is $f(x)$ minus the contract price, i.e. a profit/loss function. The problem with this is that not every function can be a profit/loss function. But it often does make sense, and in general, a profit/loss function is more versatile than a payoff function (i.e. it can be defined sensibly for any asset, which may not be possible with the payoff function for assets that involve buying/selling at various points in time). (Think about how one may define a payoff function for shorting (shorting traditionally isn't considered a derivative because it isn't a contract, but I think that's an arbitrary distinction) -- the analog of the "contract price" is then the negative of the price you "buy" it at (i.e. the negative of the price you initially sell the stock you borrowed), and the negative value that you eventually "get" (i.e. the negative of the price you eventually sell it at) is the payoff function. So the payoff function is $-x$, and is indeed the reflection in the asset value axis of the payoff for a long. Check that the profit/loss functions are also reflections, up to the interest on the stock you borrowed.) It's crucial to get some practice constructing various financial derivatives, i.e. constructing derivatives that have a given payoff function (using the first definition). $$f(x)=(a-x)I(x<a)$$ Such a function would be a useful alternative to shorting, as it doesn't allow arbitrary losses.
The whole discontinuity of the function really suggests to me a fundamental change in behaviour at the point $x=a$ -- like you just don't make the trade if $x\ge a$. This decision can only be made once the final price is discovered, so you must have bought a contract that gave you the option to make a transaction: that transaction must be selling, it must be executed after the price is realised, but it must be at price $a$, which is initially fixed. This is called a put option -- you buy the option to sell a stock at a pre-decided price. To exercise the option, you instantly buy the stock and sell it at that pre-decided price. Obviously, the price of this option matters -- otherwise, you would be getting a guaranteed nonnegative profit for free. This is really equivalent to insurance. (Verify that the payoff diagram of the seller of the put option is the negative of the buyer's diagram above.) There's a natural analog of this notion that reduces risks with longing. Once again, we see that there's a fundamental change of behavior if the price drops below $x=a$ -- you just don't complete the transaction. So you've bought an option to do something. Well, you need to sell something to make money, but the intercept of the graph suggests that you're also buying the asset, albeit at a fixed price. So this is a call option -- you buy the option to buy a stock at a pre-decided price. To make your profit, you exercise the option, then immediately sell the stock you bought. (Once again, the payoff diagram is a bit misleading and suggests that this is strictly worse than just buying a stock -- remember that the cost of a stock is the entire original stock price, while the cost of the call option is much smaller. These costs are not integrated into the payoff diagrams, but they are in the profit/loss diagrams.) Essentially, call and put options allow you to work with hindsight.
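The payoff and profit/loss functions for these options can be written down directly; the strike and premium below are arbitrary example values:

```python
def put_payoff(x, strike):
    # right to sell at the strike: exercised only when the market price x is below it
    return max(strike - x, 0.0)

def call_payoff(x, strike):
    # right to buy at the strike: exercised only when the market price x is above it
    return max(x - strike, 0.0)

def profit(payoff_value, premium):
    # the profit/loss diagram is the payoff shifted down by the contract's price
    return payoff_value - premium

strike, premium = 95.0, 2.0
assert put_payoff(80.0, strike) == 15.0    # exercised: buy at 80, sell at 95
assert put_payoff(110.0, strike) == 0.0    # losses bounded: just don't exercise
assert call_payoff(110.0, strike) == 15.0  # exercised: buy at 95, sell at 110
assert profit(put_payoff(80.0, strike), premium) == 13.0
```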
One might wonder whether a call option is perhaps not as useful as a put option -- there's not much to insure with longing, right? (compared to shorting) Perhaps, but there are certain other uses of call options that work together with put options in an interesting way, as we will soon see. Written by Abhimanyu Pallavi Sudhir on November 07, 2019 Tags -- call options, derivatives, finance, options, portfolio, put options, shorting, stocks Copyright © 2016-2019, The Winding Number by Abhimanyu Pallavi Sudhir.
Robotics and Biomimetics Implementation of Q learning and deep Q network for controlling a self balancing robot model MD Muhaimin Rahman ORCID: orcid.org/0000-0002-7430-51361, S. M. Hasanur Rashid1 & M. M. Hossain2 Robotics and Biomimetics volume 5, Article number: 8 (2018) Cite this article In this paper, the implementations of two reinforcement learning methods, namely Q learning and deep Q network (DQN), on the Gazebo model of a self balancing robot are discussed. The goal of the experiments is to make the robot model learn the best actions for staying balanced in an environment. The more time it can remain within a specified limit, the more reward it accumulates, and hence the more balanced it is. We ran tests with many hyperparameters and present the resulting performance curves. Control systems are among the most critical aspects of robotics research. Gazebo is one of the most robust multi-robot simulators at present, and the ability to use the Robot Operating System (ROS) with Gazebo makes it even more powerful. However, there is very little documentation on how to use ROS and Gazebo for controller development. In our previous paper [1], we attempted to demonstrate and document the use of PID, fuzzy logic, and LQR controllers using ROS and Gazebo on a self-balancing robot model. We have since worked on reinforcement learning, and in this paper we present the implementation of Q learning and deep Q network on the same model. The paper is structured as follows. "Related works" section presents related work on the subject. "Robot model" section describes the robot model. "Reinforcement learning methods as controllers" section shows the implementation of Q learning and DQN as controllers. Finally, "Conclusion and future work" section concludes. Lei Tai and Ming Liu [2] worked on mobile robot exploration using CNN-based reinforcement learning.
They trained and simulated a TurtleBot in Gazebo to develop an exploration strategy based on raw sensor values from an RGB-D sensor. The company ErleRobotics has extended the OpenAI environment to Gazebo [3], deploying Q-learning and Sarsa algorithms in various exploratory environments. Loc Tran et al. [4] developed a training model for an unmanned aerial vehicle to explore among static obstacles in both Gazebo and the real world, but the proposed reinforcement learning method is unclear from the paper. Volodymyr Sereda [5] used Q-learning on a custom Gazebo model with ROS for an exploration strategy. Rowan Border [6] used Q-learning with a neural network representation for robot search and rescue using ROS and a TurtleBot. Robot model The robot model is described in the paper [1]. It has one chassis and two wheels. The task of the model is to keep the robot balanced, i.e., keep its pitch angle within ±5°. The longer it remains within the limits, the more reward it receives. Figure 1 shows the block diagram and Fig. 2 shows the Gazebo model of the self-balancing robot. Simple block diagram of the model Gazebo model The robot's IMU sensor measures the roll, pitch, and yaw angles of the chassis every second and sends them to the controller. The controller then calculates the optimum action value to make the chassis tilt toward the set point. Figure 3 shows the control system of the robot. Controller block diagram Reinforcement learning methods as controllers Previously, we worked on traditional controllers such as PID, Fuzzy PD, PD+I, and LQR [1]. The biggest problem with those methods is that they need to be tuned manually, so reaching optimal controller values depends on much trial and error, and often optimal values are not reached at all. The biggest benefit of reinforcement learning algorithms as controllers is that the model tunes itself to reach the optimum values. The following two sections discuss Q learning and deep Q network (Additional file 1).
Q learning Q-learning was developed by Christopher John Cornish Hellaby Watkins [7]. According to Watkins, "it provides agents with the capability of learning to act optimally in Markovian domains by experiencing the consequences of actions, without requiring them to build maps of the domains" [8]. In a Markovian domain, the Q function—the model to be generated using the algorithm—calculates the expected utility for a given finite state s and every possible finite action a. The agent—which is the robot in this case—selects the optimum action a having the highest value of Q(s, a); this action-choosing rule is also called the policy [8]. Initially, the Q(s, a) function values are assumed to be zero. After every training step, the values get updated according to the following equation (Additional file 2) $$\begin{aligned} Q(s_t,a_t) \leftarrow Q(s_t,a_t)+ \alpha \left(r+\gamma \max_{a} Q(s_{t+1},a) - Q(s_t,a_t)\right) \end{aligned}$$ The objective of the model in our project is to keep it within limits, i.e., ±5°. At first, the robot model, the Q matrix, and the policy \(\pi\) are initialized. There are some interesting points to make. The states are not finite: within the limit range, hundreds and thousands of pitch angles are possible, and having thousands of columns is not possible. So, we discretized the state values into 20 state angles from −10° to 10°. For the action values, we chose ten different velocities: [−200, −100, −50, −25, −10, 10, 25, 50, 100, 200] ms−1. The Q matrix had 20 columns, each representing a state, and ten rows, each representing an action. Initially, the Q-values were assumed to be 0, and some random actions were specified for every state in the policy \(\pi\). We trained for 1500 episodes, each episode having 2000 iterations. At the beginning of each episode, the simulation was refreshed. Whenever the robot's state exceeded the limit, it was penalized with a reward of \(-100\). The Q table is updated at each step according to Eq. 1.
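The setup above (state discretization, a zero-initialised Q table, and the update rule) can be sketched as follows; the simulation interface is omitted, and the epsilon-greedy exploration rate is an assumed detail not stated in the text:

```python
import random

N_STATES = 20                                                # pitch angles, -10° to 10°
ACTIONS = [-200, -100, -50, -25, -10, 10, 25, 50, 100, 200]  # wheel velocities

def state_index(pitch_deg, lo=-10.0, hi=10.0):
    """Map a continuous pitch angle to one of N_STATES discrete states."""
    pitch = min(max(pitch_deg, lo), hi - 1e-9)
    return int((pitch - lo) / (hi - lo) * N_STATES)

# Q[s][a] is the expected utility of taking action a in state s
Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]

def q_update(s, a, r, s_next, alpha=0.7, gamma=0.999):
    """One training step of the update equation above."""
    Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])

def choose_action(s, epsilon=0.1):
    """Epsilon-greedy version of the policy: mostly pick argmax_a Q(s, a)."""
    if random.random() < epsilon:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q[s][a])
```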
The Algorithm 1 shows the full algorithm. (Additional file 3) Result and discussion The simulation was run for three different \(\alpha\) values (0.7, 0.65, 0.8), with a \(\gamma\) value of 0.999. Figure 4 shows the rewards vs episodes for those \(\alpha\)s. It is evident that not every learning rate allowed the robot to earn the targeted amount of rewards within the training period. For the \(\alpha\) values of 0.7 and 0.8, the robot reached the maximum possible accumulated reward, 2000, within 400 episodes. The curve with the \(\alpha\) value of 0.7 is less stable compared to that of 0.8. However, the curve with the \(\alpha\) value of 0.65 never achieved the maximum accumulated reward (Additional file 4). Rewards for different \(\alpha\) Deep Q network (DQN) Mnih et al. [9] first used deep learning as a variant of the Q learning algorithm to play six games of Atari 2600, outperforming all previous algorithms. In their paper, two unique approaches were used: experience replay, and derivation of Q values in one forward pass (Additional file 5). In the technique of experience replay, the experiences of an agent, i.e., \((state, reward, action, state_{new})\), are stored over many episodes. In the learning period, after each episode, random batches of data from the experience are used to update the model [9]. According to the paper, there are several benefits to such an approach (Additional file 6): it allows greater data efficiency, as each step of experience can be used in many weight updates; randomizing batches breaks correlations between samples; and the behaviour distribution is averaged over many of its previous states. Derivation of Q values in one forward pass In the classical Q learning approach, one gives a state and an action as input and obtains the Q value for that state and action. Replicating this approach in a neural network is problematic, as one would have to feed the state together with each possible action of the agent to the model (Additional file 7).
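The experience replay mechanism described above can be sketched as a small buffer class; the capacity is an illustrative value, not one taken from the paper:

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores (state, reward, action, next_state) experiences over many episodes
    and serves random minibatches, breaking correlations between consecutive samples."""

    def __init__(self, capacity=10000):
        # old experiences are discarded automatically once capacity is reached
        self.memory = deque(maxlen=capacity)

    def push(self, state, reward, action, next_state):
        self.memory.append((state, reward, action, next_state))

    def sample(self, batch_size):
        # each stored step can appear in many different training batches
        return random.sample(self.memory, min(batch_size, len(self.memory)))

    def __len__(self):
        return len(self.memory)
```

After each episode, `sample` would feed a random minibatch to the model update.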
This would lead to many forward passes through the same model. Instead, they designed the model in such a way that it predicts the Q values of every action for a given state, so only one forward pass is required. Figure 5 shows a sample architecture for one state with two actions. Sample deep Q network architecture Implementation on the robot model The implementation of the DQN on our robot model is similar to the Q learning method, with some exceptions. At first, a model was initialized instead of a Q matrix. In the \(\epsilon\)-greedy policy, instead of choosing the action based on the policy \(\pi\), Q values were calculated according to the model. At the end of every episode, the model was trained using random mini-batches of experience. At first, an architecture with two hidden Relu layers of 20 units was selected, with a final linear dense layer of ten units, a \(\gamma\) of 0.999, and \(\alpha\) values of 0.65, 0.7, and 0.8. Algorithm 2 shows the DQN algorithm as implemented on the robot model. The architecture of the model is simple. It is a multi-layer perceptron network with two hidden layers of 40 nodes; the last layer has 10 output nodes. The activation function used in every hidden layer is the rectified linear unit, and the last layer has a linear activation function (Fig. 6). Schematic diagram of DQN architecture used From Fig. 7, we see that the total rewards for \(\alpha\) = 0.65 are significantly higher. The curve starts approximately from 1750 and reaches the maximum total reward, 2000, within the 200th episode. However, the accumulated rewards with \(\alpha\) values of 0.7 and 0.8 are meager: they accrued rewards of approximately 50–60 the whole time. Later, the architecture was changed to 2 hidden layers of 40 Relu units, and the value of \(\gamma\) was set to 0.9. Figure 8 shows that both curves reached the highest accumulated rewards within 200 episodes in the new configuration.
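The one-forward-pass idea can be illustrated with a tiny pure-Python perceptron matching the shapes described (one pitch-angle input, two hidden ReLU layers of 40 units, ten linear outputs); the random weights here stand in for trained ones:

```python
import random

def dense(x, W, b):
    # fully connected layer: y_i = sum_j W[i][j] * x[j] + b[i]
    return [sum(wij * xj for wij, xj in zip(row, x)) + bi for row, bi in zip(W, b)]

def relu(v):
    return [max(0.0, vi) for vi in v]

def init_layer(n_out, n_in, rng):
    W = [[rng.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)]
    return W, [0.0] * n_out

rng = random.Random(0)
W1, b1 = init_layer(40, 1, rng)    # input: the pitch angle
W2, b2 = init_layer(40, 40, rng)
W3, b3 = init_layer(10, 40, rng)   # output: one Q value per action, linear

def q_values(pitch):
    """A single forward pass yields the Q values of all ten actions at once."""
    h1 = relu(dense([pitch], W1, b1))
    h2 = relu(dense(h1, W2, b2))
    return dense(h2, W3, b3)

qs = q_values(0.3)
best = max(range(len(qs)), key=lambda a: qs[a])  # greedy action from one pass
```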
Rewards for three different \(\alpha\)s with \(\gamma\) 0.999 Rewards versus episodes for new architecture Performance curve for PID, fuzzy logic and LQR Comparison to traditional methods In our previous paper [1], we evaluated the performance of PID, fuzzy logic, and LQR on a self-balancing robot model and compared those controllers. Figure 9 shows the performance curves for PID, Fuzzy P, LQR, and DQN. It shows that the LQR and fuzzy controllers were not as stable as PID, although we had to tune all of them manually. The DQN performance curves are more stable than Fuzzy P and LQR, but less stable than PID. There may be two reasons for this: first, the PID algorithm gives continuous action values, while our architecture is designed for discrete values; second, the reward function for this architecture only requires the pitch angle to stay between −5° and 5°, and narrowing down that range would help the architecture perform better (Additional file 8). Conclusion and future work The implementation of Q learning and deep Q network as controllers on the Gazebo robot model was shown in this paper, along with the details of the algorithms. However, some further improvements can be made. For instance, it was assumed that the robot operates in a Markovian state space, which is generally not the case: inverted pendulum models are in general non-Markovian, so there must exist some dependencies among the states. Recurrent neural networks are therefore a promising direction for future work. Moreover, ten predefined velocity values were used as actions, whereas in real-world applications action values have a continuous range, so for more complex models this method may not work. In that case, deep reinforcement learning algorithms with continuous action spaces, such as actor-critic reinforcement learning [10], can be used. Finally, this work should be extended toward real-world scenarios. Rahman MDM, Rashid SMH, Hassan KMR, Hossain MM.
Comparison of different control theories on a two wheeled self balancing robot. In: AIP conference proceedings, 1980; 1: 060005. 2018. https://aip.scitation.org/doi/abs/10.1063/1.5044373. Tai L, Liu M. Mobile robots exploration through cnn-based reinforcement learning. Robot. Biomim. 2016;3(1):24. https://doi.org/10.1186/s40638-016-0055-x. Zamora I, Lopez NG, Vilches VM, Cordero AH. Extending the openai gym for robotics: a toolkit for reinforcement learning using ROS and gazebo. CoRR, vol. abs/1608.05742, 2016. http://arxiv.org/abs/1608.05742. Tran LD, Cross CD, Motter MA, Neilan JH, Qualls G, Rothhaar PM, Trujillo A, Allen BD. Reinforcement learning with autonomous small unmanned aerial vehicles in cluttered environments. In: 15th AIAA aviation technology, integration, and operations conference, Jun 2015. https://doi.org/10.2514/6.2015-2899. Sereda V. Machine learning for robots with ROS. Master's thesis, Maynooth University, Maynooth, Co. Kildare, 2017. Border R. Learning to save lives: using reinforcement learning with environment features for efficient robot search. White paper, University of Oxford, 2015. Watkins CJ. Learning from delayed rewards. Ph.D. dissertation, King's College, London, May 1989. Watkins CJCH, Dayan P. Q-learning. Machine Learning, vol. 8, no. 3, pp. 279–292, May 1992. https://doi.org/10.1007/BF00992698. Mnih V, Kavukcuoglu K, Silver D, Graves A, Antonoglou I, Wierstra D, Riedmiller MA. Playing Atari with deep reinforcement learning. CoRR, vol. abs/1312.5602, 2013. http://arxiv.org/abs/1312.5602. Mnih V, Badia AP, Mirza M, Graves A, Lillicrap T, Harley T, Silver D, Kavukcuoglu K. Asynchronous methods for deep reinforcement learning. In: Proceedings of the 33rd International Conference on Machine Learning, Proceedings of Machine Learning Research, Balcan MF, Weinberger KQ (eds), vol. 48. New York, NY, USA: PMLR, 20–22 Jun 2016, pp. 1928–1937. http://proceedings.mlr.press/v48/mniha16.html.
The original project comprises this paper and [1]. MDMR contributed the simulations and the writing of this paper. SMHR and MMH reviewed both papers. All authors read and approved the final manuscript. The paper has no external source of funding. Department of Mechanical Engineering, Bangladesh University of Engineering and Technology, Dhaka, Bangladesh MD Muhaimin Rahman & S. M. Hasanur Rashid Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka, Bangladesh M. M. Hossain MD Muhaimin Rahman S. M. Hasanur Rashid Correspondence to MD Muhaimin Rahman. Additional files PID1: Performance values of PID with Kp 100, Ki 0.5, Kd 0.1. FuzzyPD: Performance values of the Fuzzy PD control system. FuzzyPD+I: Performance values of the Fuzzy PD+I control system. LQ1R1: Performance values of the LQR control system with Q 10 and R 100. LQ2R2: Performance values of the LQR control system with Q 100 and R 1000. PID2: Performance values of the PID control system with Kp 50, Ki 0.8 and Kd 0.05. PID3: Performance values of the PID control system with Kp 25, Ki 0.8 and Kd 0.1. P1: Performance values of the P control system with Kp 50000. Rahman, M.M., Rashid, S.M.H. & Hossain, M.M. Implementation of Q learning and deep Q network for controlling a self balancing robot model. Robot. Biomim. 5, 8 (2018). https://doi.org/10.1186/s40638-018-0091-9
October 2013, 33(10): 4579-4594. doi: 10.3934/dcds.2013.33.4579 Statistical stability for multi-substitution tiling spaces Rui Pacheco 1, and Helder Vilarinho 1, Universidade da Beira Interior, Rua Marquês d'Ávila e Bolama, Covilhã, 6200-001, Portugal Received July 2012 Revised January 2013 Published April 2013 Given a finite set $\{S_1,\dots,S_k \}$ of substitution maps acting on a certain finite number (up to translations) of tiles in $\mathbb{R}^d$, we consider the multi-substitution tiling space associated to each sequence $\bar a\in \{1,\ldots,k\}^{\mathbb{N}}$. The action by translations on such spaces gives rise to uniquely ergodic dynamical systems. In this paper we investigate the rate of convergence for ergodic limits of patches frequencies and prove that these limits vary continuously with $\bar a$. Keywords: multi-substitutions, tiling spaces, invariant measures, dynamical systems, statistical stability. Mathematics Subject Classification: Primary: 37A15, 37A25, 52C2. Citation: Rui Pacheco, Helder Vilarinho. Statistical stability for multi-substitution tiling spaces. Discrete & Continuous Dynamical Systems, 2013, 33 (10) : 4579-4594. doi: 10.3934/dcds.2013.33.4579
A non-linear model of hydrogen production by Caldicellulosiruptor saccharolyticus for diauxic-like consumption of lignocellulosic sugar mixtures Johanna Björkmalm ORCID: orcid.org/0000-0002-5758-41371,2, Eoin Byrne2, Ed W. J. van Niel2 & Karin Willquist1 Biotechnology for Biofuels volume 11, Article number: 175 (2018) Cite this article Caldicellulosiruptor saccharolyticus is an attractive hydrogen producer suitable for growth on various lignocellulosic substrates. The aim of this study was to quantify uptake of pentose and hexose monosaccharides in an industrial substrate and to present a kinetic growth model of C. saccharolyticus that includes sugar uptake on defined and industrial media. The model is based on Monod and Hill kinetics extended with gas-to-liquid mass transfer and a cybernetic approach to describe diauxic-like growth. Mathematical expressions were developed to describe hydrogen production by C. saccharolyticus consuming glucose, xylose, and arabinose. The model parameters were calibrated against batch fermentation data. The experimental data included four different cases: glucose, xylose, sugar mixture, and wheat straw hydrolysate (WSH) fermentations. The fermentations were performed without yeast extract. The substrate uptake rate of C. saccharolyticus on single sugar-defined media was higher on glucose compared to xylose. In contrast, in the defined sugar mixture and WSH, the pentoses were consumed faster than glucose. Subsequently, the cultures entered a lag phase when all pentoses were consumed after which glucose uptake rate increased. This phenomenon suggested a diauxic-like behavior as was deduced from the successive appearance of two peaks in the hydrogen and carbon dioxide productivity. The observation could be described with a modified diauxic model including a second enzyme system with a higher affinity for glucose being expressed when pentose saccharides are consumed. This behavior was more pronounced when WSH was used as substrate. 
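The diauxic-like switch summarized above (glucose uptake repressed while pentoses remain) can be illustrated with a toy two-substrate Monod model; all parameter values below are made up for illustration and are not the fitted values from this study:

```python
def simulate(hours=40.0, dt=0.01):
    """Euler integration of biomass X growing on pentose P and glucose G (g/L)."""
    X, P, G = 0.01, 5.0, 5.0
    mu_max, Ks, Y = 0.3, 0.1, 0.1   # 1/h, g/L, g biomass per g sugar
    traj = []
    t = 0.0
    while t < hours:
        u_p = mu_max * P / (Ks + P)
        # cybernetic-style switch: glucose uptake throttled while pentose is present
        u_g = mu_max * G / (Ks + G) * Ks / (Ks + P)
        X_new = X + (u_p + u_g) * X * dt
        P = max(P - u_p * X / Y * dt, 0.0)
        G = max(G - u_g * X / Y * dt, 0.0)
        X = X_new
        traj.append((t, X, P, G))
        t += dt
    return traj

traj = simulate()
# pentose is exhausted first; only then does the glucose uptake rate pick up
```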
The previously observed co-consumption of glucose and pentoses with a preference for the latter was herein confirmed. However, once all pentoses were consumed, C. saccharolyticus most probably expressed another uptake system to account for the observed increased glucose uptake rate. This phenomenon could be quantitatively captured in a kinetic model of the entire diauxic-like growth process. Moreover, the observation indicates a regulation system that has fundamental research relevance, since pentose and glucose uptake in C. saccharolyticus has only been described with ABC transporters, whereas previously reported diauxic growth phenomena have been correlated mainly to PTS systems for sugar uptake. The need for renewable energy is ever increasing to tackle the major challenges of global warming, energy demand, and limited resources. According to statistics published by the International Energy Agency [1], just over 86% of the Total Primary Energy Supply (TPES) in 2014 was produced from fossil resources, leaving a modest 14% originating from renewable energy sources. When putting these numbers in relation to the Paris Agreement adopted in 2015, which targets keeping the global average temperature increase below 2 °C above pre-industrial levels [2], it is evident that actions need to be taken. There are, however, positive trends, in that the supply of renewable energy sources has grown faster, with an average annual rate of 2.0% since 1990, than the world TPES, which has grown at 1.8% [1]. Hydrogen has the potential of becoming an important renewable energy carrier. Currently, hydrogen is widely used as a reducing agent in the chemical and food industry. However, using hydrogen as an energy carrier in sustainable applications is of great interest due to its potentially high efficiency of conversion to usable power, its low emissions of pollutants, and its high energy density [3].
Up to 96% of the world's hydrogen production is fossil based, i.e., from natural gas, oil, and coal [4]. A sustainable alternative to the conventional methods for producing hydrogen is by biological methods, i.e., biohydrogen. There are four major categories in which production of biological hydrogen can be classified, namely: photofermentation of organic compounds by photosynthetic bacteria, biophotolysis of water using algae and cyanobacteria, bioelectrohydrogenesis, and fermentative hydrogen production, so-called dark fermentation, from organic wastes or energy crops [5, 6]. The latter is the focus of this study, where various sugars present in, e.g., agricultural waste like wheat straw, can be fermented by microorganisms for hydrogen production. This also addresses the challenge of converting lignocellulosic biomass to renewable energy. Lignocellulosic biomass has been previously described as "the most abundant organic component of the biosphere" with an annual production of 1–5·10¹³ kg and, therefore, is an attractive substrate for biofuel production [7]. Lignocellulosic biomass primarily consists of cellulose (40–60% CDW), hemicellulose (20–40%), and lignin (10–25%) [8]. Cellulose and hemicellulose can be enzymatically hydrolyzed into smaller sugar molecules. The thermophilic microorganism Caldicellulosiruptor saccharolyticus is able to produce hydrogen from lignocellulosic biomass through dark fermentation and has previously shown the potential of producing hydrogen close to the maximum theoretical yield of 4 mol hydrogen per mol hexose [9,10,11]. C. saccharolyticus is cellulolytic and can utilize a broad range of di- and monosaccharides for hydrogen production [12]. Van de Werken et al. [13] showed that C. saccharolyticus coferments glucose and xylose as it lacks catabolite repression. VanFossen et al. [14] revealed that although C. saccharolyticus co-utilizes different sugars, it has a preference for some sugars over others.
Xylose was discussed as a preferred sugar over glucose and is, therefore, utilized by the microorganism to a greater extent than glucose. However, the substrate uptake kinetics were not determined, and a yeast extract (YE)-supplemented medium was used [13]. By developing a mathematical model of a biological process, it is possible to describe past and predict future performance, as well as to gain a deeper understanding of the physiological mechanisms behind the process. The aim of this study is to present a model that describes the growth of C. saccharolyticus on lignocellulosic sugar mixtures and how the uptake rate changes when the sugars are used simultaneously or individually. Similar kinds of models have been developed [15, 16]; however, these models focus on single sugar uptake. The model proposed here builds on the one presented by Ljunggren et al. [15] by adding the consumption rates for each individual sugar in the sugar mixtures. Monod [17] first described the phenomenon of diauxic growth, where a microorganism exposed to two substrates first consumes the substrate that supports the most efficient growth rate. Several models have been developed in this area [18, 19], describing how to capture the sequential uptake of sugars when multiple sugars are present. This phenomenon can be modeled using a cybernetic approach that determines whether a particular enzyme, needed for a specific sugar to be metabolized, is upregulated or not. This paper describes the development of a substrate-based uptake model using Monod-type kinetics, including biomass growth, product formation, liquid-to-gas mass transfer, and enzyme synthesis with Hill kinetics, with C. saccharolyticus as model organism. The model presented in this paper takes into consideration the usage of different sugars, including hexoses, i.e., glucose, and pentoses, i.e., xylose and arabinose. The model describes the different sugar uptakes individually, exemplifying the rate at which each sugar is consumed when C.
saccharolyticus grows on the sugar mixtures and on the individual sugars, respectively.

Strains and cultivation medium

Caldicellulosiruptor saccharolyticus DSM 8903 was obtained from the Deutsche Sammlung von Mikroorganismen und Zellkulturen (Braunschweig, Germany). Sub-cultivations were conducted in 250 mL serum flasks with 50 mL modified DSM 640 medium [20]. The carbon source of each cultivation corresponded to that of the subsequent fermentor cultivation. The 1000× vitamin solution and modified SL-10 solution were prepared according to [20] and [21], respectively. All bioreactor experiments used a modified DSM 640 medium excluding yeast extract, according to Willquist and van Niel [20]. To quantify the kinetics of xylose and glucose uptake, and the effect of mixing the sugars in pure and industrial media, growth and hydrogen production were monitored in four different cases, where the total sugar concentration in the medium was fixed at 10 g/L. Cultivations were performed using 10 g/L glucose (Case 1), 10 g/L xylose (Case 2), a sugar mixture (Case 3), and wheat straw hydrolysate (Case 4). In Case 4, a 9% solution of wheat straw hydrolysate was used, corresponding to approximately 10 g/L total sugars. In Case 3, the sugar mixture contained pure sugars at the same concentrations as in the wheat straw hydrolysate (6.75 g/L glucose, 3.06 g/L xylose, and 0.173 g/L arabinose). The total sugar concentrations at the start of the fermentation included the sugar added as described above and the additional sugar introduced with the inoculum, which varied slightly between conditions. The starting sugar concentrations were, therefore, as follows: Case 1, 12.11 ± 0.09 g/L glucose; Case 2, 10.96 ± 0.20 g/L xylose; Case 3, 8.69 ± 0.12 g/L glucose, 3.38 ± 0.19 g/L xylose, and 0.38 ± 0.01 g/L arabinose; Case 4, 7.31 ± 0.07 g/L glucose, 3.36 ± 0.06 g/L xylose, and 0.34 ± 0.00 g/L arabinose.
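For reference, the mean starting compositions reported above can be collected in a small data structure. This is an illustrative sketch only; the case labels and the helper function are ours, not from the study's code:

```python
# Mean starting sugar concentrations (g/L) taken from the text;
# the dictionary keys are hypothetical labels introduced here for illustration.
CASES = {
    "case1_glucose":     {"glucose": 12.11, "xylose": 0.00,  "arabinose": 0.00},
    "case2_xylose":      {"glucose": 0.00,  "xylose": 10.96, "arabinose": 0.00},
    "case3_sugar_mix":   {"glucose": 8.69,  "xylose": 3.38,  "arabinose": 0.38},
    "case4_hydrolysate": {"glucose": 7.31,  "xylose": 3.36,  "arabinose": 0.34},
}

def total_sugar(case: str) -> float:
    """Total starting sugar concentration (g/L) for one case."""
    return sum(CASES[case].values())
```

Note that all four totals exceed the nominal 10 g/L because of the extra sugar carried over with the inoculum, as stated above.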
Fermentor setup

Batch cultivations were performed in a jacketed, 3-L fermentor equipped with an ADI 1025 Bio-Console and an ADI 1010 Bio-Controller (Applikon, Schiedam, The Netherlands). A working volume of 1 L was used for the cultivations, and the pH was maintained at the optimal value of 6.5 ± 0.1 by automatic titration with 4 M NaOH. The temperature was thermostatically kept at 70 ± 1 °C. Stirring was maintained at 250 rpm and nitrogen was sparged through the medium at a rate of 6 L/h. Sparging was initiated 4 h after inoculation and was continued throughout the cultivation. A condenser cooled with water at 4 °C was used to prevent evaporation of the medium. Samples were collected at regular time intervals for monitoring of the optical density. The supernatant from each culture was collected and stored at − 20 °C for later quantification of the various sugars and organic acids. Gas samples were collected from the fermentor's headspace to quantify H2 and CO2. The sugar mixture and wheat straw hydrolysate experiments were done in triplicate. The individual sugar fermentations were done in biological duplicate. A defined medium was autoclaved in each fermentor, while anoxic solutions of cysteine HCl·H2O (1 g/L), MgCl2·6H2O (0.4 g/L), and carbon source(s) were prepared separately and added to the fermentor before inoculation. Just after inoculation, the fermentor was closed for 4 h to allow the buildup of CO2 necessary to initiate growth, as previously described [20]. Optical density was determined using an Ultraspec 2100 pro spectrophotometer (Amersham Biosciences) at 620 nm. Sugars, organic acids, hydroxymethyl furfural (HMF), and furfural were detected using HPLC (Waters, Milford, MA, USA). For the quantification of organic acids, an HPLC equipped with an Aminex HPX-87H ion-exchange column (Bio-Rad, Hercules, USA) at 60 °C with 5 mM H2SO4 as mobile phase was used at a flow rate of 0.6 mL/min.
Glucose, xylose, and arabinose quantification was conducted using an HPLC with a Shodex SP-0810 column (Shodex, Japan) with water as mobile phase at a flow rate of 0.6 mL/min. CO2 and H2 were quantified with a dual-channel Micro-GC (CP-4900; Varian, Micro-gas chromatography, Middelburg, The Netherlands), as previously described [21].

Mathematical model description

The model developed for C. saccharolyticus in this study takes into account the kinetics of biomass growth, consumption of glucose, xylose, and arabinose, and formation of the products acetate, hydrogen, and carbon dioxide. Furthermore, the model includes liquid-to-gas mass transfer of hydrogen and carbon dioxide as well as the equilibrium between carbon dioxide, bicarbonate (HCO3−), and carbonate (CO32−). The model is developed on a cmol basis. The formation of lactate was excluded to reduce the complexity of the model, as it constituted less than 5% of the total product in the sugar mixture fermentations. In addition, inhibition due to high aqueous H2 concentration and high osmolarity was not included in the model, to reduce the number of unknown parameters. This is motivated by the fact that the focus of this study is mainly on the consumption behavior of C. saccharolyticus on the different sugars. The model is constructed with a nomenclature and setup similar to the anaerobic digestion model no. 1 (ADM1) described by Batstone et al. [22] and was implemented in MATLAB R2015b (Mathworks, USA). The following biochemical degradation reactions are the basis for the model (Eqs. 1, 2). Biomass formation from sugar [23]:

$$ \text{Sugar} \xrightarrow{\rho_{1}} Y_{X}\,\mathrm{CH_{1.62}O_{0.46}N_{0.23}S_{0.0052}P_{0.0071}} $$

Reaction 1 is not balanced, since there were elements in the fermentation medium, i.e., cysteine, that were not included in the model. The value of the yield factor Y_X is calculated from the data of the batch fermentations.
It is assumed that nitrogen, sulfur, and phosphorus are in excess in the medium and are therefore not included as separate entities in the mathematical model. Sugar degradation to product formation by C. saccharolyticus in cmol:

$$ \mathrm{CH_{2}O} + \tfrac{1}{3}\,\mathrm{H_{2}O} \xrightarrow{\rho_{2}} \tfrac{2}{3}\,\mathrm{CH_{2}O}\;(\text{acetate}) + \tfrac{1}{3}\,\mathrm{CO_{2}} + \tfrac{2}{3}\,\mathrm{H_{2}} $$

Model inputs and initial conditions

The model requires a range of input variables. The lag time was determined by calculating the intersection point between the lag phase and the exponential phase when taking the natural logarithm of the biomass concentration over time, as illustrated by Swinnen et al. [24]. Since the lag phase depends on the culture status before the fermentation, which was not addressed in this study, it was excluded from the experimental data when the latter were compared with model data and when setting initial input values for the model. The start values of the unknown state variables are listed in Table 1. The constants used in the model are presented in Table 2.

Table 1 Start data of the unknown state variables in the model

Table 2 Constants used in the model

Mass balances for biomass growth, substrate consumption, and product formation in the liquid phase

The stoichiometric relationships and mass balances of the reactants and products present in the model are displayed in Table 3.
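As a quick sanity check (our addition, not part of the original analysis), the elemental balance of the catabolic reaction in Eq. 2 can be verified numerically; on a cmol basis, carbon, hydrogen, and oxygen must each balance across the reaction:

```python
# Eq. 2: CH2O + 1/3 H2O -> 2/3 CH2O (acetate, in cmol) + 1/3 CO2 + 2/3 H2
# Exact fractions avoid floating-point noise in the balance check.
from fractions import Fraction as F

species = {            # (C, H, O) atoms per mole of the written formula
    "CH2O": (1, 2, 1),
    "H2O":  (0, 2, 1),
    "CO2":  (1, 0, 2),
    "H2":   (0, 2, 0),
}

lhs = {"CH2O": F(1), "H2O": F(1, 3)}
rhs = {"CH2O": F(2, 3), "CO2": F(1, 3), "H2": F(2, 3)}

def elements(side):
    """Sum the C, H, and O content of one side of the reaction."""
    totals = [F(0)] * 3
    for name, coeff in side.items():
        for i, n in enumerate(species[name]):
            totals[i] += coeff * n
    return totals

assert elements(lhs) == elements(rhs)  # C, H, and O all balance
```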
The model is supplemented with an enzyme, E2, and cybernetic variables v and u as in [18], where the former controls the activity of the enzyme and the latter is the fractional allocation of a critical resource for the synthesis of the enzyme. We hypothesize that, initially, a first enzyme system is present that mediates the uptake of both hexose and pentose sugars, but with a preference for the pentoses (phase I). This transporter is only available as long as pentoses are present. After depletion of the pentoses, a second enzyme system, E2, is synthesized, allowing uptake of the remaining hexose sugars by a second transporter (phase II). For convenience, we refer to each enzyme system, which consists of multiple proteins, simply as an enzyme, and use this abstraction in the kinetic model as well.

Table 3 Description of the model setup including mass balances for the sugars (glucose, xylose, and arabinose), enzyme E2, biomass, acetate, aqueous hydrogen, and aqueous carbon dioxide

The mass balance for the biomass, X, depends on the rate of substrate consumption ρ, with Monod-type kinetics, and on the biomass decay rate, which is described with first-order kinetics, where rcd (h−1) is the cell death rate and Y_X (cmol/cmol) is the yield of biomass from total sugar (Table 3). A second glucose rate equation (ρ_Glu,2) is added to describe the diauxic-like growth appearance in the sugar mixture. The rate of glucose consumption once the pentose sugars are depleted is dependent on enzyme E2. The rate of enzyme synthesis, ρ_E, is based on Hill kinetics, as in [19]; the decay rate of the enzyme follows first-order kinetics; and the third term, −1·E2·ρ_Glu,2, represents the dilution of the specific enzyme level and is described with Hill-like kinetics, i.e., effectively proportional to E2².
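The two-phase structure described above can be sketched as a small ODE system. This is a minimal illustration under made-up parameter values (not the calibrated values of Table 8), omitting products, gas transfer, and acid–base chemistry; the cybernetic control is reduced to a single Hill-type switch u that derepresses E2 synthesis once the pentoses are depleted:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values only (NOT the calibrated values from Table 8).
km, km2 = 0.5, 0.4        # maximal uptake rates, phase I / phase II (h^-1)
Ks = 0.01                  # affinity constant, all sugars (cmol/L)
Ks_E2, n = 0.005, 4        # Hill constant and coefficient for E2 synthesis
alpha, beta = 0.1, 0.05    # E2 synthesis and decay rates (h^-1)
Yx, rcd = 0.2, 0.01        # biomass yield (cmol/cmol), cell death rate (h^-1)

def rhs(t, y):
    glu, xyl, ara, E2, X = y

    def monod(s):
        s = max(s, 0.0)            # guard against tiny numerical overshoot
        return s / (Ks + s)

    # Phase I: common Monod uptake of all three sugars (single km, as in the text).
    r_glu1 = km * monod(glu) * X
    r_xyl = km * monod(xyl) * X
    r_ara = km * monod(ara) * X
    # Phase II: E2-dependent glucose uptake.
    r_glu2 = km2 * monod(glu) * E2 * X
    # Cybernetic-style switch: ~0 while pentoses remain, ~1 after depletion.
    pent = xyl + ara
    u = Ks_E2**n / (Ks_E2**n + pent**n)
    # Hill-based synthesis, first-order decay, dilution term -E2*r_glu2.
    r_E2 = alpha * u * X - beta * E2 - E2 * r_glu2
    growth = Yx * (r_glu1 + r_xyl + r_ara + r_glu2)
    return [-(r_glu1 + r_glu2), -r_xyl, -r_ara, r_E2, growth - rcd * X]

y0 = [0.29, 0.11, 0.013, 0.0, 0.005]   # cmol/L, roughly Case 3 magnitudes
sol = solve_ivp(rhs, (0, 80), y0, max_step=0.1)
```

Plotting `sol.y` against `sol.t` reproduces the qualitative pattern described in the text: E2 stays near zero while pentoses are present and rises only after their depletion.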
The parameters km and km,2 (h−1) are the maximal uptake rates in phase I and phase II, respectively, and Ks,glu, Ks,glu,2, Ks,xyl, Ks,ara, and Ks,E2 (cmol/L) are the affinity constants for the uptake of glucose, xylose, and arabinose and for the synthesis of enzyme E2, respectively. Finally, α is the enzyme synthesis rate (h−1) and β is the enzyme decay rate (h−1). Acetate, hydrogen, and carbon dioxide are produced in the liquid phase. Y_Ac (cmol/cmol), Y_H2 (mol/cmol), and Y_CO2 (cmol/cmol) represent the conversion yields of acetate, hydrogen, and carbon dioxide, respectively, from both hexose and pentose sugars. The conversion yields were fitted to experimental data from the batch fermentations. Y_X was determined from the slope of the curve of total sugar vs biomass; here, only phase I was considered. Y_Ac and Y_CO2 were determined by first taking the slopes of the curves of total sugar vs acetate and total sugar vs carbon dioxide, after which the actual yields were calculated according to the following equation:

$$ Y_{\text{Ac}} = \frac{Y_{\text{Ac,curve slope}}}{1 - Y_{X}} $$

When Y_H2 was calculated in the same way as in Eq. 3, it gave too high a conversion yield. To obtain a more accurate yield, the effects of liquid-to-gas mass transport were considered and Y_H2 was instead determined as follows:

$$ Y_{\mathrm{H_{2}}} = \frac{\mathrm{H}_{2,\text{end}} - \mathrm{H}_{2,\text{start}}}{\text{Tot sugar}_{\text{start}} - \text{Tot sugar}_{\text{end}}} $$

Acid–base reactions

The acid–base reaction considered in the model is that of carbon dioxide, bicarbonate, and carbonate formation. ρ_AB,CO2 in Table 4 describes the rate of formation of bicarbonate and carbonate.
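The two yield estimates above (Eqs. 3 and 4) reduce to one-line formulas; the sketch below uses made-up example numbers purely for illustration, not data from the study:

```python
def acetate_yield(curve_slope: float, Yx: float) -> float:
    """Eq. 3: correct the raw slope (acetate vs total sugar) for the
    fraction of sugar carbon routed to biomass (Y_X)."""
    return curve_slope / (1.0 - Yx)

def hydrogen_yield(h2_start: float, h2_end: float,
                   sugar_start: float, sugar_end: float) -> float:
    """Eq. 4: endpoint-based H2 yield (mol H2 per cmol sugar consumed),
    used because liquid-to-gas transport biases the slope-based estimate."""
    return (h2_end - h2_start) / (sugar_start - sugar_end)

# Hypothetical example values: a raw acetate slope of 0.4 and Y_X = 0.2
# give a corrected acetate yield of 0.5 cmol/cmol.
print(acetate_yield(0.4, 0.2))
```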
Table 4 Kinetic rate equation for the acid–base reaction

CO2,sol is the sum of the ionic species HCO3− and CO32−, and Eq. 5 gives the differential equation for CO2,sol:

$$ \frac{d\,\mathrm{CO}_{2,\text{sol}}}{dt} = \rho_{\mathrm{AB,CO_{2}}} $$

Liquid-to-gas mass transfer and mass balances for product formation

Hydrogen and carbon dioxide are produced in the liquid phase and then transferred to the gas phase via liquid-to-gas mass transport. ρ_t,H2 describes the mass transfer rate of hydrogen and ρ_t,CO2 the mass transfer rate of carbon dioxide (Table 5). p_gas,H2 and p_gas,CO2 (in atm, then converted to Pa) are the partial pressures of H2 and CO2, respectively.

Table 5 Liquid-to-gas mass transfer processes

The mass balances describing the gaseous products can be written as in Eqs. 6 and 7, where q_gas (L/h) is the total gas flow, and Vliq and Vgas (L) are the liquid and gas volumes, respectively:

$$ \frac{d\,\mathrm{H}_{2,\mathrm{g}}}{dt} = \frac{V_{\text{liq}}}{V_{\text{gas}}}\cdot\rho_{t,\mathrm{H_{2}}} - \mathrm{H}_{2,\mathrm{g}}\cdot\frac{q_{\text{gas}}}{V_{\text{gas}}} $$

$$ \frac{d\,\mathrm{CO}_{2,\mathrm{g}}}{dt} = \frac{V_{\text{liq}}}{V_{\text{gas}}}\cdot\rho_{t,\mathrm{CO_{2}}} - \mathrm{CO}_{2,\mathrm{g}}\cdot\frac{q_{\text{gas}}}{V_{\text{gas}}} $$

A sensitivity analysis can identify parameters that have a great effect on the model output. The sensitivity analysis was based on the OFAT approach, i.e., one-factor-at-a-time [25].
The chosen parameter was altered by a factor δ, as described in [26], to see the effect on the different state-variable outputs, as in the following equation:

$$ \varGamma_{i,j} = \frac{\left( y_{i}(\theta_{j}) - y_{i}(\theta_{j} + \delta\cdot\theta_{j}) \right)/y_{i}(\theta_{j})}{\delta} $$

where Γ_i,j is the sensitivity of state variable i with respect to model parameter j at each time point of the MATLAB simulation. Furthermore, y_i(θ_j) is the value of state variable i with respect to parameter j, and y_i(θ_j + δ·θ_j) is the value of state variable i when parameter j has been altered by a factor δ. The parameters included in the sensitivity analysis were km, km,2, Ks,glu, Ks,glu,2, Ks,xyl, Ks,ara, Ks,E2, α, n, rcd, and k_L a_H2, and the state variables considered were Glu, Xyl, Ara, Ac, X, and H2. The presented sensitivity of one parameter with respect to a specific state variable was calculated as the average of Γ_i,j over the simulated time points.

Model calibration

To obtain a better fit to the experimental data, the model parameters were calibrated using the knowledge revealed by the sensitivity analysis. This was done with the function lsqcurvefit in MATLAB, which uses a least-squares method for non-linear curve fitting, seeking the coefficients x that solve the following problem:

$$ \min_{x} \left\| F(x, x\text{data}) - y\text{data} \right\|_{2}^{2} = \min_{x} \sum_{i} \left( F(x, x\text{data}_{i}) - y\text{data}_{i} \right)^{2} $$

given the input data xdata and the observed output ydata, where xdata and ydata are matrices or vectors and F(x, xdata) is a matrix-valued or vector-valued function of the same size as ydata.
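Both the OFAT sensitivity measure (Eq. 8) and the least-squares calibration (Eq. 9) have direct counterparts in Python's scipy. The sketch below demonstrates them on a toy exponential-decay model of our own invention, not the study's actual MATLAB code:

```python
import numpy as np
from scipy.optimize import least_squares

def model(theta, t):
    """Toy state variable: first-order decay, y = y0 * exp(-k t)."""
    y0, k = theta
    return y0 * np.exp(-k * t)

def ofat_sensitivity(theta, j, t, delta=0.01):
    """Eq. 8: relative change in model output per relative change in
    parameter j, averaged over the simulated time points."""
    pert = theta.copy()
    pert[j] += delta * theta[j]
    y, yp = model(theta, t), model(pert, t)
    return np.mean((y - yp) / y) / delta

t = np.linspace(0, 10, 50)
ydata = model(np.array([1.0, 0.3]), t)   # synthetic "observations"

# Eq. 9: minimise the sum of squared residuals, as lsqcurvefit does.
fit = least_squares(lambda th: model(th, t) - ydata, x0=[0.5, 0.1])
```

For this toy model the sensitivity with respect to y0 is exactly −1 (the output scales linearly with y0), illustrating how the averaged Γ ranks parameter influence.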
The lsqcurvefit function starts at x0 and finds the coefficients, i.e., the parameters x, that best fit the non-linear function fun(x, xdata) to the data ydata:

$$ x = \text{lsqcurvefit}(\text{fun}, x0, x\text{data}, y\text{data}) $$

The uncertainties of the calibrated parameters were assessed by calculating confidence intervals. This was done with the function nlparci in MATLAB, which computes the 95% confidence intervals for the estimated non-linear least-squares parameters.

Growth profiles on the various sugars

The growth profiles of the single sugar experiments (glucose, Case 1; xylose, Case 2), the sugar mixture experiments (Case 3), and the wheat straw hydrolysate experiments (Case 4) are presented in Fig. 1a–d. Glucose is consumed approximately two times faster when used as the sole substrate (Case 1) than in the sugar mixtures (Cases 3 and 4). Xylose, on the other hand, is consumed approximately two times slower when used as the sole substrate and is completely consumed after approximately 60 h, compared with around 20 h when co-fermented with other sugars (Cases 3 and 4; Fig. 1c, d). The highest production rates of acetate and hydrogen occurred around 20 h both in the sugar mixture and in the wheat straw hydrolysate fermentations. Lactate was formed just after 20 h (Case 3) and 30 h (Case 4), reaching in total 0.015 and 0.014 cmol/L, respectively.

Fig. 1 Fermentation profiles of Cases 1–4: a glucose experiment, b xylose experiment, c sugar mixture experiment, and d wheat straw hydrolysate experiments. The error bars indicate the standard deviation. Glu glucose, Xyl xylose, Ara arabinose, Ac acetate, Lac lactate, X biomass

The calculated lag phases differed between experiments. The lag phases of the sugar mixture experiments ranged from 9 to 11 h, whereas the lag phase of the wheat straw hydrolysate experiments was 4 h. This observation could be attributed to the richer nutrient content of the wheat straw hydrolysate compared with the defined sugar mixture medium. A similar observation was made by Pawar et al. [27].
The lag phase with glucose alone was 4 h, but there was no lag phase with xylose alone. It is worth noting, though, that it took more effort to initiate growth on xylose than on glucose, as two out of four replicates failed, whereas none of the other experiments (Cases 1, 3, and 4) failed. This is because precautions are needed to start a culture on xylose in the absence of yeast extract, such as withholding sparging for several hours. The profiles of the mixed sugars indicate biphasic growth, where the uptake of glucose decreased after xylose was depleted, but then increased again (Fig. 1c, d). The two-phase sugar uptake was more pronounced in the wheat straw hydrolysate fermentations. The behavior is further illustrated by the hydrogen and CO2 productivities (Fig. 2a, b). This observation has, to our knowledge, not been reported for Caldicellulosiruptor previously, although the transcriptomics of multiple sugar uptake has been extensively studied [13, 14]. One possible reason could be that many multi-sugar experimental studies on this genus have been performed on yeast extract-supplemented media [3]. Because yeast extract itself partly supports growth [20], it possibly masks biphasic behavior. Moreover, the initial ratio of pentose to hexose sugars was higher in those studies [14] than in the WSH used in this study. Thus, after xylose was consumed, the culture adapted to a hexose-only medium, which initiated a second phase of growth.

Fig. 2 a Hydrogen productivity and b CO2 productivity in Cases 3 and 4, the sugar mixture fermentation and the wheat straw hydrolysate fermentation, respectively

The emerging pattern resembles diauxic growth behavior, which was first described by Monod [17] and is characterized by two growth phases, often separated by a lag period. This normally occurs in the presence of two carbon sources, where the preferred one is consumed first by the microorganism, followed by the second after a lag period [28,29,30].
However, in the case of C. saccharolyticus, both pentose and hexose sugars are consumed simultaneously, albeit with a slight preference for the former. When the pentose sugars are depleted, hexose consumption continues; in Case 4 it did so at an increased rate (Table 8). To quantify this behavior and investigate whether the theory of diauxic growth could explain the observations, a kinetic model was developed consisting of two phases. In phase I, glucose is consumed simultaneously with xylose and arabinose. Van de Werken et al. [13] concluded that growth on glucose and xylose mixtures, as well as growth on the individual sugars, triggers transcription of the genes encoding a xylose-specific ABC transport system. This supports our hypothesis that glucose, xylose, and arabinose were initially transported by the same uptake system. However, when xylose was depleted, phase II started, with a new uptake system being expressed that had a higher affinity for glucose and transported it at an altered rate. It is relevant to observe, however, that diauxic growth behavior is generally considered to be related to PTS systems [31,32,33], whereas, according to current knowledge, C. saccharolyticus only possesses ABC transport systems [13, 14]. Still, other transport systems have been described that can generate this diauxic growth profile. For example, in Streptomyces coelicolor and related species, the genes involved in carbon catabolite repression are PTS independent, and glucose kinase is instead the main controlling enzyme [33].

Determination of conversion yields

The conversion yields calculated from the batch experiments differ from the stoichiometric yields (Table 6). To begin with, in the single sugar fermentations the calculated yields are lower than the corresponding stoichiometric yields. This contrasts with the yields calculated for the sugar mixture experiments, except for Y_Ac, which was slightly lower.
The lower yield for acetate could be due to part of the acetate, or rather acetyl-CoA, being used as a building block for cell mass production [34].

Table 6 Calculated carbon and redox balances plus the calculated yields of the four different experiments and their corresponding stoichiometric yields

The carbon balances attained in the model were 90 and 102% with start data from the sugar mixture experiments and the WSH experiments, respectively, which are equal or close to the values calculated from the experimental data, 90 and 107%, respectively (Table 6). The values above 100% in the carbon balance for the WSH fermentations could be because other carbon sources, such as oligosaccharides, may be present that are also converted to products, giving a higher carbon and electron output. Dynamic simulations using benchmark parameter values [15] showed discrepancies between the experimental results and the model predictions. To further improve the dynamic simulations, a sensitivity analysis was conducted to determine the most important parameters. This was done with start values both from the sugar mixture fermentations and from the wheat straw hydrolysate fermentations. The change, δ, in the parameter value was set to 1%, as in [35]. The sensitivity analysis allowed ranking of the parameters, which was useful for the model calibration. The most sensitive parameters, i.e., those with a sensitivity value of > 1% with regard to each of the state variables, are listed in Table 7. The state variables that were affected most by a change in parameter value were Glu and Xyl. The sensitivities of the other parameters for the different state variables were less than 1%.

Table 7 Most sensitive parameters, i.e., sensitivity value > 1%, listed in descending order for each state variable that was evaluated

Parameter calibration

The sensitivity analysis served as a basis for the parameter calibration.
The model was calibrated with data from the four different batch experiments, Cases 1–4. Start values of the state variables were taken from the experimental data (Table 1), and initial parameter values, i.e., benchmark values, were taken from the literature [15] or guesstimated, e.g., by manually fitting the curves to the data points. The calibrated parameters, together with their 95% confidence intervals, are given in Table 8. Some of the parameters were calibrated graphically and are therefore given without a confidence interval. The simulations with start data from the single glucose and xylose fermentations were carried out without the diauxic-like growth additions; thus, only phase I was applied.

Table 8 Parameters calibrated to experimental data

The km values for Cases 3 and 4 describe the maximal simultaneous uptake rates of glucose, xylose, and arabinose (Table 8), and they are modeled with the same value for all the sugars in phase I. However, the Ks values for glucose in phase I, Ks,glu, are higher than the Ks values for xylose, Ks,xyl, which indicates a lower affinity for glucose in phase I, since xylose is present and preferred. Moreover, Ks,glu in Case 4 is 18 times higher than Ks,glu,2 and than Ks,glu in Case 3. One explanation is the greater affinity for xylose in phase I; another possible explanation is that Ks,glu in Case 4 also includes an inhibition term due to the characteristics of the wheat straw hydrolysate medium, as in Eq. 11:

$$ K_{\text{s,glu}} = K_{\text{s,glu,real}} \cdot I $$

where I represents a competitive inhibition, Eq. 12:

$$ I = 1 + \frac{S_{I}}{K_{I}} $$

with S_I the concentration of the inhibitor and K_I the inhibition parameter. This is possibly due to unknown inhibiting compounds in the wheat straw hydrolysate or other factors that inhibit glucose uptake in phase I in Case 4.
The reason behind the competitive inhibition has not been identified, but we hypothesize the presence of oligosaccharides that might be taken up preferentially instead of glucose. However, these sugars were not quantified in the HPLC analysis of the WSH. The km,2 value for Case 4 is 50% lower than the corresponding glucose uptake rate in Case 1. One explanation is that the enzymes involved in sugar uptake in Case 4 take some time to be synthesized, making glucose consumption slower in the WSH than in the single glucose fermentation. Again, the presence of inhibiting compounds or competing oligosaccharides could further slow the glucose uptake rate. Furthermore, the results show that, on single sugars and mineral medium, glucose uptake is approximately 35% faster than xylose uptake (Table 8). Moreover, growth of C. saccharolyticus on glucose is approximately 40% faster than on xylose (Table 9). This outcome contradicts previous results on these two sugars in media supplemented with yeast extract (YE), where growth was faster on xylose than on glucose [13, 14]. An explanation for this observation could be that C. saccharolyticus needs other sugars (present in YE) to grow optimally on xylose. Indeed, when both sugars are present, the growth on xylose is stimulated by the co-uptake of glucose. The stoichiometric relationship of the glucose-to-xylose uptake rate, ρ(Glucose):ρ(Xylose), was affected by the medium used and is approximately 0.7 and 0.3 in phase I for growth on the defined sugar mixture and the wheat straw medium, respectively (data from Fig. 1). Until xylose is depleted, the total glucose, xylose, and arabinose conversion rate, i.e., 0.54·3 = 1.62 h−1, is similar to that of xylose conversion in the absence of glucose, i.e., 1.58 h−1. This observation is supported by other studies with C. saccharolyticus using different sugar mixtures, both with and without YE, e.g., Willquist [36].
Xylose uptake increases if a small concentration of glucose is present, or if the fermentor is either sparged with CO2 instead of N2 gas or closed, allowing buildup of HCO3− in the reactor.

Table 9 Maximal specific growth rates, µmax, calculated from km, km,2, and Y_X values

Model prediction

A comparison between the model and the experimental results for the combined sugars is depicted in Table 10 and Figs. 3 and 4. The results show that a diauxic-like behavior model simulates the experimental data of C. saccharolyticus well when grown on mixtures of pentose and hexose sugars. Without the addition of a second enzyme equation, as well as cybernetic variables controlling the upregulation of the enzyme, the experimental data could not be simulated.

Table 10 R2 values describing the fit between experimental data and model simulation

Fig. 3 Sugar mixture experimental data and model simulation. a Glucose (cmol/L) data and model; b xylose (cmol/L) data and model; c arabinose (cmol/L) data and model; d acetate (cmol/L) data and model; e biomass (cmol/L) data and model; f enzyme E2 (cmol/L) data and model; g hydrogen productivity (L/h/L) data and model; and h accumulated hydrogen (mol/L) data and model. Exp. data E28–E30 denote the experimental replicates

Fig. 4 Wheat straw hydrolysate experimental data and model simulation. a Glucose (cmol/L) data and model; b xylose (cmol/L) data and model; c arabinose (cmol/L) data and model; d acetate (cmol/L) data and model; e biomass (cmol/L) data and model; f enzyme E2 (cmol/L) data and model; g hydrogen productivity (L/h/L) data and model; and h accumulated hydrogen (mol/L) data and model. Exp. data E13–E15 denote the experimental replicates

Table 10 shows the fit between the experimental data and the model simulation, displaying the regression analysis values.
It is clear that the model is well able to describe the consumption of the different sugars as well as biomass growth, acetate formation, and hydrogen accumulation in Cases 3 and 4. The model without the diauxic-like additions was better at describing the individual xylose fermentations (Case 2) than the individual glucose fermentations (Case 1) with respect to biomass growth and hydrogen production (Table 10). The model predicts only a small second peak in hydrogen productivity compared with the data of the defined sugar mixture fermentations (Fig. 3g). However, the model succeeds in describing the diauxic-like behavior of the hydrogen productivity profile in the wheat straw hydrolysate fermentations (Fig. 4g). The uptake of the three sugars, as well as the formation of acetate, is well described by the model for both Cases 3 and 4 (Figs. 3a–d, 4a–d). According to the simulation, the concentration of the enzyme (used to describe the diauxic behavior) is very low, close to zero, in the beginning; when phase I ends, enzyme synthesis starts and the concentration increases up to a peak, after which it begins to decrease just before t = 60 h in Case 3 and somewhat earlier in Case 4 (Figs. 3f, 4f). The enzyme synthesis is dependent on the biomass concentration, which is why it follows the behavior of the latter. The two biomass growth phases are clearly displayed in Case 4 and captured by the model (Fig. 4e), where a first growth phase takes place between 0 and 20 h and a second between 20 and 45 h. The phenomenon of two growth phases is characteristic of diauxic growth behavior, as described in the literature on the topic [18, 28, 37]. The hydrogen productivity profile, in both Cases 3 and 4, is somewhat delayed in the model (Figs. 3g, 4g). This could be due to a slight underestimation of the k_L a_H2 value. The benchmark k_L a_H2 value used, from Ljunggren et al.
[15], was later calibrated against experimental data, resulting in a higher value (Table 8). Still, mass transfer appears to be less efficient in the model, which is not able to fully describe the experimental data. The outcome of this study revealed that in batch mode, C. saccharolyticus ferments (un)defined sugar mixtures via different growth phases in a diauxic-like manner. This behavior could be successfully simulated with a kinetic growth model combining substrate-based Monod-type kinetics with enzyme synthesis described by Hill kinetics, together with cybernetic variables controlling the upregulation of the enzyme. The model was able to predict the behavior of growth on sugar mixtures both in a defined medium and in wheat straw hydrolysate medium. The model supported the following sequence: xylose is the preferred substrate, but glucose is taken up simultaneously, possibly via the same transporter. After xylose is depleted, glucose uptake continues through a newly induced transporter system, leading to a second hydrogen productivity peak. We further conjecture that this diauxic-like pattern might appear in defined media not containing complex nutrient mixtures, such as yeast extract, as the latter might blur the transition point from dominant xylose uptake to dominant glucose uptake by C. saccharolyticus. Future studies should aim at investigating how the various uptake mechanisms in C. saccharolyticus act and contribute to the phenomena described in this study. In addition, a further developed model, verifying the values of several kinetic parameters, including separate maximal uptake rates for the different sugars in the sugar mixture as well as inhibition functions, would improve the applicability of this model for industrial processes. International Energy Agency. Renewables information 2017: overview. http://www.iea.org/publications/freepublications/publication/RenewablesInformation2017Overview.pdf. Accessed 10 Jan 2018. United Nations.
Adoption of the Paris Agreement. 2015. http://unfccc.int/resource/docs/2015/cop21/eng/l09r01.pdf. Accessed 31 May 2017. Pawar SS, van Niel EWJ. Thermophilic biohydrogen production: how far are we? Appl Microbiol Biotechnol. 2013;97(18):7999–8009. https://doi.org/10.1007/s00253-013-5141-1. Press RJ, Santhanam KSV, Miri MJ, Bailey AV, Takacs GA. Introduction to hydrogen technology. 1st ed. Hoboken: Wiley; 2008. van Niel EWJ. Biological processes for hydrogen production. In: Hatti-Kaul R, Mamo G, Mattiasson B, editors. Anaerobes in biotechnology. Berlin: Springer International Publishing; 2016. p. 155–93. Das D, Veziroglu TN. Advances in biological hydrogen production processes. Int J Hydrogen Energy. 2008;33(21):6046–57. https://doi.org/10.1016/j.ijhydene.2008.07.098. Claassen PAM, van Lier JB, Contreras AML, van Niel EWJ, Sijtsma L, Stams AJM, de Vries SS, Weusthuis RA. Utilisation of biomass for the supply of energy carriers. Appl Microbiol Biotechnol. 1999;52(6):741–55. https://doi.org/10.1007/s002530051586. Hamelinck CN, van Hooijdonk G, Faaij APC. Ethanol from lignocellulosic biomass: techno-economic performance in short-, middle- and long-term. Biomass Bioenergy. 2005;28(4):384–410. https://doi.org/10.1016/j.biombioe.2004.09.002. Kengen SWM, Goorissen HP, Verhaart M, Stams AJM, van Niel EWJ, Claassen PAM. Biological hydrogen production by anaerobic microorganisms. In: Soetaert W, Vandamme EJ, editors. Biofuels. Chichester: Wiley; 2009. p. 197–221. Willquist K, Zeidan AA, van Niel EWJ. Physiological characteristics of the extreme thermophile Caldicellulosiruptor saccharolyticus: an efficient hydrogen cell factory. Microb Cell Fact. 2010;9:89. https://doi.org/10.1186/1475-2859-9-89. Thauer RK, Jungermann K, Decker K. Energy conservation in chemotrophic anaerobic bacteria. Bacteriol Rev. 1977;41(1):100–80. Rainey FA, Donnison AM, Janssen PH, Saul D, Rodrigo A, Bergquist PL, et al. Description of Caldicellulosiruptor saccharolyticus gen. nov., sp. 
nov: an obligately anaerobic, extremely thermophilic, cellulolytic bacterium. FEMS Microbiol Lett. 1994;120(3):263–6. https://doi.org/10.1111/j.1574-6968.1994.tb07043.x. van de Werken HJG, Verhaart MRA, VanFossen AL, Willquist K, Lewis DL, Nichols JD, Goorissen HP, Mongodin EF, Nelson KE, van Niel EWJ, et al. Hydrogenomics of the extremely thermophilic bacterium Caldicellulosiruptor saccharolyticus. Appl Environ Microbiol. 2008;74(21):6720–9. https://doi.org/10.1128/AEM.00968-08. VanFossen AL, Verhaart MRA, Kengen SMW, Kelly RM. Carbohydrate utilization patterns for the extremely thermophilic bacterium Caldicellulosiruptor saccharolyticus reveal broad growth substrate preferences. Appl Environ Microbiol. 2009;75(24):7718–24. https://doi.org/10.1128/AEM.01959-09. Ljunggren M, Willquist K, Zacchi G, van Niel EW. A kinetic model for quantitative evaluation of the effect of hydrogen and osmolarity on hydrogen production by Caldicellulosiruptor saccharolyticus. Biotechnol Biofuels. 2011;4:31. https://doi.org/10.1186/1754-6834-4-31. Auria R, Boileau C, Davidson S, Casalot L, Christen P, Liebgott PP, Combet-Blanc Y. Hydrogen production by the hyperthermophilic bacterium Thermotoga maritima Part II: modeling and experimental approaches for hydrogen production. Biotechnol Biofuels. 2016;9:268. https://doi.org/10.1186/s13068-016-0681-0. Monod J. Recherches sur la croissance des cultures bactériennes. Ph.D. thesis, Université de Paris, Hermann, Paris. 1941. Kompala DS, Ramkrishna D, Jansen NB, Tsao GT. Investigation of bacterial growth on mixed substrates: experimental evaluation of cybernetic models. Biotechnol Bioeng. 1986;28:1044–55. https://doi.org/10.1002/bit.260280715. Boianelli A, Bidossi A, Gualdi L, Mulas L, Mocenni C, Pozzi G, Vicino A, Oggioni MR. A non-linear deterministic model for regulation of diauxic lag on cellobiose by the pneumococcal multidomain transcriptional regulator CelR. PLoS ONE. 2012;7:10. https://doi.org/10.1371/journal.pone.0047393. 
Willquist K, van Niel EWJ. Growth and hydrogen production characteristics of Caldicellulosiruptor saccharolyticus on chemically defined minimal media. Int J Hydrogen Energy. 2012;37(6):4925–9. https://doi.org/10.1016/j.ijhydene.2011.12.055. Zeidan AA, van Niel EWJ. A quantitative analysis of hydrogen production efficiency of the extreme thermophile Caldicellulosiruptor owensensis OLT. Int J Hydrogen Energy. 2010;35(3):1128–37. https://doi.org/10.1016/j.ijhydene.2009.11.082. Batstone DJ, Keller J, Angelidaki I, Kalyuzhnyi SV, Pavlostathis SG, Rozzi A, Sanders WTM, Siegrist H, Vavilin VA. Anaerobic Digestion Model No. 1 IWA task group for mathematical modelling of anaerobic digestion processes. London: IWA Publishing; 2002. de Vrije T, Mars AE, Budde MA, Lai MH, Dijkema C, de Waard P, Claassen PAM. Glycolytic pathway and hydrogen yield studies of the extreme thermophile Caldicellulosiruptor saccharolyticus. Appl Microbiol Biotechnol. 2007;74(6):1358–67. Swinnen IAM, Bernaerts K, Dens EJJ, Geeraerd AH, Van Impe JF. Predictive modelling of the microbial lag phase: a review. Int J Food Microbiol. 2004;94(2):137–59. https://doi.org/10.1016/j.ijfoodmicro.2004.01.006. Hamby DM. A review of techniques for parameter sensitivity analysis of environmental models. Environ Monit Assess. 1994;32(2):135–54. https://doi.org/10.1007/bf00547132. Barrera EL, Spanjers H, Solon K, Amerlinck Y, Nopens I, Dewulf J. Modeling the anaerobic digestion of cane-molasses vinasse: extension of the Anaerobic Digestion Model No. 1 (ADM1) with sulfate reduction for a very high strength and sulfate rich wastewater. Water Res. 2015;71:42–54. https://doi.org/10.1016/j.watres.2014.12.026. Pawar SS, Nkemka VN, Zeidan AA, Murto M, van Niel EWJ. Biohydrogen production from wheat straw hydrolysate using Caldicellulosiruptor saccharolyticus followed by biogas production in a two-step uncoupled process. Int J Hydrogen Energy. 2013;38(22):9121–30. https://doi.org/10.1016/j.ijhydene.2013.05.075. 
Roop JI, Chang KC, Brem RB. Polygenic evolution of a sugar specialization trade-off in yeast. Nature. 2016;530:336–49. https://doi.org/10.1038/nature16938. Wang J, Atolia E, Hua B, Savir Y, Escalante-Chong R, Springer M. Natural variation in preparation for nutrient depletion reveals a cost–benefit tradeoff. PLoS Biol. 2015;13:1. https://doi.org/10.1371/journal.pbio.1002041. Kremling A, Geiselmann J, Ropers D, de Jong H. Understanding carbon catabolite repression in Escherichia coli using quantitative models. Trends Microbiol. 2015;23(2):99–109. https://doi.org/10.1016/j.tim.2014.11.002. Deutscher J. The mechanisms of carbon catabolite repression in bacteria. Curr Opin Microbiol. 2008;11(2):87–93. https://doi.org/10.1016/j.mib.2008.02.007. Chu DF. In silico evolution of diauxic growth. BMC Evol Biol. 2015;15:211. https://doi.org/10.1186/s12862-015-0492-0. Görke B, Stülke J. Carbon catabolite repression in bacteria: many ways to make the most out of nutrients. Nat Rev Microbiol. 2008;6(8):613–24. https://doi.org/10.1038/nrmicro1932. Shen N, Zhang F, Song XN, Wang YS, Zeng RJ. Why is the ratio of H2/acetate over 2 in glucose fermentation by Caldicellulosiruptor saccharolyticus? Int J Hydrogen Energy. 2013;38(26):11241–7. https://doi.org/10.1016/j.ijhydene.2013.06.091. Tartakovsky B, Mu SJ, Zeng Y, Lou SJ, Guiot SR, Wu P. Anaerobic Digestion Model No. 1-based distributed parameter model of an anaerobic reactor: II. Model validation. Bioresour Technol. 2008;99(9):3676–84. https://doi.org/10.1016/j.biortech.2007.07.061. Willquist K. Physiology of Caldicellulosiruptor saccharolyticus: a hydrogen cell factory. Ph.D. thesis, Lund University, Sweden. 2010. Song HS, Liu C. Dynamic metabolic modeling of denitrifying bacterial growth: the cybernetic approach. Ind Eng Chem Res. 2015;54(42):10221–7. https://doi.org/10.1021/acs.iecr.5b01615. JB: data analysis, calculations, model development, and manuscript writing. 
EB: planning and execution of the fermentation experiments, HPLC and GC analyses, and manuscript writing. EvN: supervision of fermentation, analysis, and manuscript writing. KW: supervision of modeling, analysis and fermentation, and manuscript writing. All authors contributed to revision of the manuscript and approved the text, figures, and tables for submission. All authors read and approved the final manuscript. The authors acknowledge the Swedish Energy Agency for the financial support of this work under "Metanova" Project No. 31090-2. All data generated or analyzed during this study are included in this article. If additional information is needed, please contact the corresponding author. The study was funded by the Swedish Energy Agency, which did not participate in the execution of the study or in the manuscript writing. Affiliations: Department of Energy and Circular Economy, RISE Research Institutes of Sweden, PO Box 857, 501 15 Borås, Sweden (Johanna Björkmalm, Karin Willquist); Division of Applied Microbiology, Lund University, PO Box 124, 221 00 Lund, Sweden (Eoin Byrne, Ed W. J. van Niel). Correspondence to Johanna Björkmalm. Björkmalm, J., Byrne, E., van Niel, E.W.J. et al. A non-linear model of hydrogen production by Caldicellulosiruptor saccharolyticus for diauxic-like consumption of lignocellulosic sugar mixtures. Biotechnol Biofuels 11, 175 (2018). doi:10.1186/s13068-018-1171-3. Keywords: Caldicellulosiruptor saccharolyticus; kinetic growth model; glucose uptake; xylose uptake; diauxic
Brain Topography

Robust EEG/MEG Based Functional Connectivity with the Envelope of the Imaginary Coherence: Sensor Space Analysis

Jose M. Sanchez Bornot, KongFatt Wong-Lin, Alwani Liyana Ahmad, Girijesh Prasad

The brain's functional connectivity (FC) estimated at sensor level from electromagnetic (EEG/MEG) signals can provide quick and useful information towards understanding cognition and brain disorders. Volume conduction (VC) is a fundamental issue in FC analysis due to the effects of instantaneous correlations. FC methods based on the imaginary part of the coherence (iCOH) of any two signals are readily robust to VC effects, but neglecting the real part of the coherence leads to negligible FC when the processes are truly connected but with a zero or π-phase (modulus 2π) interaction. We ameliorate this issue by proposing a novel method that implements an envelope of the imaginary coherence (EIC) to approximate the coherence estimate of supposedly active underlying sources. We compare EIC with state-of-the-art FC measures that include lagged coherence, iCOH, phase lag index (PLI) and weighted PLI (wPLI), using bivariate autoregressive and stochastic neural mass models. Additionally, we create realistic simulations where three and five regions were mapped on a template cortical surface and synthetic MEG signals were obtained after computing the electromagnetic leadfield. With this simulation and comparison study, we also demonstrate the feasibility of sensor FC analysis using receiver operating characteristic (ROC) curve analysis whilst varying the signal's noise level. However, these results should be interpreted with caution given the known limitations of the sensor-based FC approach. Overall, we found that EIC and iCOH demonstrate superior results with the most accurate FC maps. As they complement each other in different scenarios, this will be important for studying normal and diseased brain activity.
Keywords: Imaginary coherence; functional and effective connectivity; electroencephalography and magnetoencephalography; volume conduction; semi-realistic simulations; Hilbert transform

Handling Editor: Fabrice Wendling.

The online version of this article (https://doi.org/10.1007/s10548-018-0640-0) contains supplementary material, which is available to authorized users.

Communication of information across the cortex, vital for cognitive function, has been suggested to involve neural dynamic oscillations and related (de)synchronization activity (Buzsáki and Draguhn 2004; Makeig et al. 2004; Singer 1999; Tallon-Baudry and Bertrand 1999). The basis of this continuously changing oscillatory behavior can be found in the complex, nonlinear and unpredictable interactions among neural populations, whose patterns still cannot be completely disclosed with modern neuroimaging techniques. A successful statistical approach should be simple and efficient enough to deal with massive data analysis while allowing clear interpretation of the results. Functional connectivity (FC) analysis in the frequency domain, based on coherence methods, has been proposed to efficiently elucidate such networks of information transfer (Fries 2005; Jensen et al. 2007; Nunez et al. 1997; Rodriguez et al. 1999; Schnitzler and Gross 2005; Shaw 1984; Simoes et al. 2003; Stam and van Straaten 2012; Wheaton et al. 2005). The implicit use of frequency-based analytical tools such as wavelets and the Fourier transform (FT) has the important advantage of circumventing issues that arise from the nonlinearity and non-stationarity of the underlying neural dynamics (Bendat and Piersol 2011; Grandchamp and Delorme 2011). In particular, the computational efficiency of these techniques and their simplicity allow the analysis of a large number of regions of interest (ROIs) and clear-cut interpretation.
Due to its superior time resolution, magnetoencephalography/electroencephalography (M/EEG) is often used to study brain dynamics (Lopes da Silva 2013; Palva and Palva 2012). However, the mixing and field spreading of the local field potentials, eventually reflected at the sensor level, pose a serious challenge for connectivity analysis. One possible solution is to first solve the inverse problem with one of the well-established methods (Friston et al. 2008; Grave de Peralta Menendez et al. 2001; Gross et al. 2001; Hämäläinen and Ilmoniemi 1994; Huang et al. 2014; Pascual-Marqui 2007; Van Veen et al. 1997) and then assess FC from the estimated source activities. Although Schoffelen and Gross (2009) suggested that FC must be analyzed in source instead of sensor space, their work also warned against excessive optimism, mainly due to volume conduction (VC) effects that are still present in the estimated source activities. Another important limitation of the latter approach is the lack of realism of the currently popular forward models, which could be addressed by using more realistic but complex and time-consuming finite element methods (Cho et al. 2015; Dannhauer et al. 2011; Lanfer et al. 2012a, b; Vorwerk et al. 2012, 2014). Another important cause of bias is the presence of deep sources that are not well estimated; in particular, this may lead to the estimation of a nearby related superficial source, or even of two or more superficial sources with mixed estimated dynamics that deceptively provide a better fit of the observed M/EEG signals. Obviously, the spread of estimated source fields, biased estimation of the number of sources, localization errors and poor separation of mixed signals will lead to false connectivity inferences. FC analyses in sensor space are important for quick analysis of brain function, i.e. without resorting to more complex source-based analyses. They have been robustly addressed by Nolte et al.
(2004), who proposed the imaginary part of the coherence (iCOH) method as an essential technique to circumvent VC effects in FC estimation. They demonstrated improved FC estimation using the iCOH measure in comparison to coherence analysis, and showed transient interactions between left–right motor cortical signals as a function of time and frequency in a real dataset. However, due to its exclusive reliance on the imaginary part, an FC estimate based on iCOH becomes negligible in some situations even in the presence of a significant true interaction, e.g. when the phase difference between two signals is near zero or π (modulus 2π). This limitation was later mitigated by the phase lag index (PLI) (Stam et al. 2007) and the weighted PLI (wPLI) (Vinck et al. 2011), demonstrated by simulations based on the Kuramoto model as well as with real data. As further evidence of the effectiveness of iCOH-based techniques, Haufe et al. (2013) explored iCOH and the phase slope index (PSI) (Nolte et al. 2008), together with multivariate Granger causality (Granger-MVAR) (Granger 1969) and partial directed coherence (PDC) approaches (Baccalá and Sameshima 2001), in sensor and source spaces using semi-realistic simulated brain data based on only two interacting sources (acting as ground truth). They found that Granger-MVAR and PDC have serious problems with VC in both sensor and source spaces. Additionally, they showed that methods based on the imaginary part of the cross-spectrum or complex coherence were able to better identify the true interactions. In a more recent simulation study, Haufe and Ewald (2016) proposed a threefold procedure to study FC, which consisted of: (1) estimating source activity with a reliable M/EEG inverse solver when the signal-to-noise ratio (SNR) is sufficiently high for the activity of interest; (2) testing for significant interactions using iCOH while comparing against a baseline estimate; and (3) assessing the connectivity direction using PSI.
They were able to show that their approach can partially recover active regions, identify a possible interaction and determine the lagging region. However, their simulations used only two linearly interacting regions, and it is unclear whether the same procedure can be successfully applied to more realistic nonlinear neural models, and/or with a higher number of ROIs and interactions. From the above, it is clear that iCOH-derived techniques are useful for FC analysis of simulated, real and clinical datasets (see also Ewald et al. 2012; Guggisberg et al. 2008; Hardmeier et al. 2014; Olde Dubbelink et al. 2014; Polanía et al. 2012; Stam et al. 2006, 2007, 2008; Vinck et al. 2011). But despite current advances, these methods still depend heavily on the imaginary part of the coherence (or cross-spectrum), hence limiting their potential in FC analysis. In this work, we address this "imaginary-part" limitation by proposing a new iCOH-derived measure: the envelope of the imaginary coherence (EIC) operator, defined here as the absolute value of the analytical signal estimated from the iCOH measure when the latter is regarded as a function in the frequency domain. We will empirically demonstrate that this operator is able to compensate for the missing real part and can readily approximate the coherence value between possibly interacting underlying sources. We will also provide arguments against using the conventional normalization procedure in the original estimation of the iCOH method, while proposing a different normalization approach. In a simulation study considering two possibly interacting sources, we will compare our proposed EIC method with state-of-the-art coherence-based approaches: classical coherence (COH), phase lock value (PLV) (Lachaux et al. 1999), iCOH (Nolte et al. 2004), PLI (Stam et al. 2007) and wPLI (Vinck et al. 2011). A surrogate-based statistical procedure proposed by Lachaux et al.
(1999) will be used to assess significant FC between two sensors which are assumed to be located near the underlying active sources. Furthermore, based on synthetically generated M/EEG signals that are more realistic and complex than in previous simulation studies, we compare EIC against other iCOH-derived techniques using receiver operating characteristic (ROC) curve analysis, where the latter is based on ROIs defined over the sensor space. This is done to avoid the selection of potentially biased thresholds for each FC measure separately, and to introduce a novel procedure to evaluate the feasibility of sensor-based FC analysis. Specifically, we present simulations of 3 and 5 interacting ROIs with neural dynamics described by a multivariate autoregressive (MVAR) model and a system of stochastic delay differential equations (SDDEs), projected onto 102 MEG channels to compute sensor-based FC measures. Throughout, we show that EIC is more robust than other methods in terms of detecting true FC and reducing spurious results, i.e. EIC is robust to VC like other iCOH-based measures but, distinctly, allows inferring significant FC even in the presence of zero or \(\pi\)-phase interactions. We also show that the classical iCOH method (Nolte et al. 2004) can accurately detect complex FC interactions despite its limitations; thus, we recommend using EIC as a complement to iCOH in practical analysis. Overall, our work sheds light on the usefulness and limitations of iCOH-derived techniques for the analysis of M/EEG data and on the feasibility of FC analysis in sensor space. In this study, we limit ourselves to the study of brain regional interactions as reflected in sensor space; the estimation of these interactions in source space with iCOH methods will be discussed in future work, though interested readers can consult the vast existing literature (e.g. Brookes et al. 2014; Colclough et al. 2015; Haufe et al. 2013; Haufe and Ewald 2016; O'Neill et al.
2015; Schoffelen and Gross 2009; Siems et al. 2016; Van de Steen et al. 2016). In Fig. 1 we illustrate an example of the generation of M/EEG signals from active brain sources, which is used to introduce the FC estimation in sensor space with iCOH-derived techniques, and illustrates how the VC effects in sensor space are directly related to the field spread of local active underlying sources. Specifically, two interacting sources are simulated in a sagittal view of the brain together with two nearby sensors located over the scalp in the same projection plane. The interactions between the sources as well as local leadfield effect over the sensors are indicated with continuous and dashed arrows, respectively. Given the sensor signals, the complete challenge is to make inferences about active source locations, their temporal signatures and identifying possible interactions among the sources. However, in this work, we shall focus only on the latter problem. Schematic to demonstrate the M/EEG signal generation using a forward problem restricted to two possibly interacting sources (dipoles) and a pair of nearby sensors. Signals \(x(t)\) and \(y(t)\) represent source activity, whereas \(u(t)\) and \(v(t)\) represent sensor recordings. 
Continuous and dashed arrows represent interaction from source \(y\) to \(x\) and influence of source dipoles over sensor recorded activity, respectively In this example, the source dynamics (\(x\) and \(y\)) can be represented using bivariate autoregressive model or neural mass model (NMM) dynamics, while their influences on the sensor measurements (\(u\) and \(v\)) are represented as, $$u={a_1}x+{b_1}y+{\varepsilon _u};~{\varepsilon _u}\sim N(0,\sigma _{u}^{2}),$$ $$v={a_2}x+{b_2}y+{\varepsilon _v};~{\varepsilon _v}\sim N\left( {0,\sigma _{v}^{2}} \right),$$ which correspond to a local leadfield model, where \({a_1},~{b_1},~{a_2},~{b_2}\) represent the mixing coefficients, and \({\varepsilon _u}\) and \({\varepsilon _v}\) are white Gaussian noise terms. The expected cross-covariance and cross-spectral estimate of the sensor signals are, $$\begin{aligned} {R_{uv}}\left( \tau \right) &={\rm E}\left[ {u\left( t \right)v\left( {t+\tau } \right)} \right] \\ &={a_1}{a_2}{R_{xx}}\left( \tau \right)+{a_1}{b_2}{R_{xy}}\left( \tau \right)+{a_2}{b_1}{R_{yx}}\left( \tau \right)+{b_1}{b_2}{R_{yy}}\left( \tau \right) \\ \end{aligned}$$ $$\begin{array}{*{20}{c}} {{S_{uv}}\left( f \right)}&=&{{a_1}{a_2}{S_{xx}}\left( f \right)+{a_1}{b_2}{S_{xy}}\left( f \right)+{a_2}{b_1}S_{{xy}}^{*}\left( f \right)+{b_1}{b_2}{S_{yy}}(f)} \end{array}$$ By using the notation \({S_{xy}}\left( f \right)=\Re \left\{ {{S_{xy}}\left( f \right)} \right\}+j\Im \left\{ {{S_{xy}}\left( f \right)} \right\},\) we obtain (Bendat and Piersol 2011): $${S_{uv}}\left( f \right)={a_1}{a_2}{S_{xx}}\left( f \right)+{b_1}{b_2}{S_{yy}}\left( f \right)+\left( {{a_1}{b_2}+{a_2}{b_1}} \right)\Re \left\{ {{S_{xy}}\left( f \right)} \right\}+j\left( {{a_1}{b_2} - {a_2}{b_1}} \right)\Im \left\{ {{S_{xy}}\left( f \right)} \right\}.$$ As can be observed in this last derivation, the main VC effect is the contamination of the real-part (\(\Re\)) of \({S_{uv}}(f)\) with auto-spectral terms, whereas the imaginary-part (\(\Im\)) 
of \({S_{uv}}(f)\) (the last term on the right-hand side of the equation) is exactly a scaled version of the imaginary part of \({S_{xy}}(f)\). This means that we can recover the imaginary part of the unknown interacting processes very well if we are able to obtain measurements from nearby sensors. In contrast, the real part is a combination of terms which includes the real part of the interacting underlying sources, \(\Re \{ {S_{xy}}(f)\}\), but this term cannot be easily extracted. The imaginary part of \({S_{uv}}(f)\) rarely vanishes for all frequencies: this occurs only if \(\Im \{ {S_{xy}}(f)\} =0\) for all frequency values, or if the determinant of the local leadfield coefficients (\({a_1}{b_2} - {a_2}{b_1}\)) is zero, both of which are rare in practice, although the former can be the case for oscillatory signals with very narrow bandwidth. Thus, the imaginary part, as measured from the harmonic analysis of the interacting sensor dynamics, can be used to obtain a measure that captures well the interactions of the underlying sources, a fact that has been exploited by methods such as iCOH, PLI and wPLI (Nolte et al. 2004; Stam et al. 2007; Vinck et al. 2011). More generally, the sample estimate of the cross-spectrum obtained from signals \({u_n}(t)\) and \({v_n}(t)\), collected across epochs \(n=1, \ldots ,N\), is $${S_{uv}}=\frac{1}{N}\mathop \sum \limits_{{n=1}}^{N} {U_n}(f)V_{n}^{*}(f)$$ where \({U_n}(f)\) and \({V_n}(f)\) are the corresponding FTs of the signals \({u_n}(t)\) and \({v_n}(t)\) for each epoch. From here, the complex coherence is computed as $${C_{uv}}\left( f \right)=\frac{{{S_{uv}}\left( f \right)}}{{\sqrt {{S_{uu}}\left( f \right){S_{vv}}\left( f \right)} }}=\Re \left\{ {{C_{uv}}\left( f \right)} \right\}+j\Im \{ {C_{uv}}\left( f \right)\} ,$$ which allows obtaining the coherence estimator \(\left| {{C_{uv}}\left( f \right)} \right|\). In the above example, with the interactions depicted in Fig. 1 and in Eqs. (1) and (2), Eq.
(7) becomes $${C_{uv}}\left( f \right)=\frac{{{a_1}{a_2}{S_{xx}}\left( f \right)+{b_1}{b_2}{S_{yy}}\left( f \right)+\left( {{a_1}{b_2}+{a_2}{b_1}} \right)\Re \left\{ {{S_{xy}}\left( f \right)} \right\}}}{{\sqrt {{S_{uu}}\left( f \right){S_{vv}}\left( f \right)} }}+j\frac{{\left( {{a_1}{b_2} - {a_2}{b_1}} \right)\Im \left\{ {{S_{xy}}\left( f \right)} \right\}}}{{\sqrt {{S_{uu}}\left( f \right){S_{vv}}\left( f \right)} }},$$ whereby using the FT (\(\mathop{\longrightarrow}^{\mathcal{F}}\)) representations for \({x_n}\left( t \right)\) and \({y_n}\left( t \right)\), $${x_n}\left( t \right)\mathop{\longrightarrow}^{\mathcal{F}}{X_n}\left( f \right)={R_n}{e^{j{\varphi _n}}},~{y_n}\left( t \right)\mathop{\longrightarrow}^{\mathcal{F}}{Y_n}\left( f \right)={r_n}{e^{j{\theta _n}}},$$ we obtain the individual expressions for the auto-spectral and cross-spectral terms: $${S_{xx}}\left( f \right)=\frac{1}{N}\mathop \sum \limits_{n} R_{n}^{2},{S_{yy}}\left( f \right)=\frac{1}{N}\mathop \sum \limits_{n} r_{n}^{2},{S_{xy}}\left( f \right)=\frac{1}{N}\mathop \sum \limits_{n} {R_n}{r_n}{e^{j({\varphi _n} - {\theta _n})}},$$ $${S_{uu}}\left( f \right)=\frac{{a_{1}^{2}}}{N}\mathop \sum \limits_{n} R_{n}^{2}+\frac{{b_{1}^{2}}}{N}\mathop \sum \limits_{n} r_{n}^{2}+\frac{{2{a_1}{b_1}}}{N}\mathop \sum \limits_{n} {R_n}{r_n}\cos \left( {{\varphi _n} - {\theta _n}} \right)+\widehat {\sigma }_{u}^{2},$$ $${S_{vv}}\left( f \right)=\frac{{a_{2}^{2}}}{N}\mathop \sum \limits_{n} R_{n}^{2}+\frac{{b_{2}^{2}}}{N}\mathop \sum \limits_{n} r_{n}^{2}+\frac{{2{a_2}{b_2}}}{N}\mathop \sum \limits_{n} {R_n}{r_n}{\text{cos}}\left( {{\varphi _n} - {\theta _n}} \right)+\widehat {\sigma }_{v}^{2}.$$ One important observation from these derivations is that the denominator used for computing the complex coherence value, i.e. 
\(\sqrt {{S_{uu}}\left( f \right){S_{vv}}\left( f \right)}\), is contaminated by a weighted average of the cosine of the phase differences of the interacting processes across trials, and thus the denominator magnitude fluctuates depending on the particular value of the phase difference. If we estimate the iCOH measure directly as the imaginary part of the complex coherence, as originally stated (Nolte et al. 2004), then iCOH loses its direct relationship to the corresponding imaginary part of possibly interacting underlying sources and can potentially become less stable. Therefore, it may be preferable to obtain iCOH directly from the cross-spectrum as \({\rm E}[\Im \{ U\left( f \right){V^*}(f)\} ]\) (without normalization) or using a different normalization factor. Notice that a normalization is recommended in order to make fair comparisons across frequencies or among groups/conditions and to guarantee that values lie in a controlled range, i.e. \(\left[ { - 1,1} \right]\) or \(\left[ {0,1} \right]\). Therefore, we introduce a more convenient normalization for iCOH in the "Coherence and Imaginary Coherence Based Measures" section, which is used in the derivation of the newly proposed method. In the discussion so far, we have not mentioned a critical problem that is usually ignored in the literature; namely, the rejection of the real part in current state-of-the-art iCOH-derived techniques, which causes the loss of information that is important for producing better FC maps. A direct consequence of this omission is that these measures show negligible values when truly connected processes have a zero or \(\pi\)-phase interaction. As a main objective of our work, we propose here a new method derived from the imaginary part that allows us to approximate and incorporate the missing real part of the coherence, and that is therefore sensitive to these interactions whilst being robust to VC.
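The key property derived above — that the imaginary part of the sensor cross-spectrum \({S_{uv}}(f)\) is an exactly scaled copy of \(\Im \{ {S_{xy}}(f)\}\), while the real part accumulates auto-spectral contamination — can be checked numerically. The sketch below is ours, with hypothetical mixing coefficients and a π/4 source phase lag:

```python
import numpy as np

rng = np.random.default_rng(0)
T, f0, N = 256, 10, 400                  # samples/epoch, target bin, epochs
a1, b1, a2, b2 = 1.0, 0.6, 0.4, 0.9      # hypothetical local leadfield weights
delta = np.pi / 4                        # true phase lag between sources
t = np.arange(T)

Suv = Sxy = 0.0 + 0.0j
for _ in range(N):
    phi = rng.uniform(0, 2 * np.pi)      # random common phase per epoch
    x = np.cos(2 * np.pi * f0 * t / T + phi)
    y = np.cos(2 * np.pi * f0 * t / T + phi - delta)
    u = a1 * x + b1 * y + 0.1 * rng.standard_normal(T)   # sensor mixtures
    v = a2 * x + b2 * y + 0.1 * rng.standard_normal(T)
    Suv += np.fft.fft(u)[f0] * np.conj(np.fft.fft(v)[f0]) / N
    Sxy += np.fft.fft(x)[f0] * np.conj(np.fft.fft(y)[f0]) / N

det = a1 * b2 - a2 * b1
# Im{Suv} should match det * Im{Sxy}; Re{Suv} is inflated by Sxx, Syy terms.
print(Suv.imag, det * Sxy.imag, Suv.real)
```

The real part of `Suv` is dominated by the auto-spectral terms, which is precisely the VC contamination that iCOH-type measures avoid by discarding it, while the imaginary part survives only rescaled by the leadfield determinant.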
Coherence and Imaginary Coherence Based Measures The iCOH measure can be obtained either directly from the imaginary part of the complex coherence [Eq. (13)] or using a more appropriate normalization term as shown below [Eq. (14)]: $$iCO{H_1}\left( f \right)=\Im \{ {C_{uv}}(f)\} ,$$ $$iCO{H_2}={\rm E}\left[ {\Im \{ U\left( f \right){V^*}(f)\} } \right]/{\rm E}\left[ {\left| {\mathcal{H}\left( {\Im \left\{ {U\left( f \right){V^*}\left( f \right)} \right\}} \right)} \right|} \right].$$ The modified iCOH version introduced in Eq. (14) is conveniently normalized using a denominator estimated via the Hilbert transform (HT). Here, the function \(\mathcal{H}( \cdot )\) produces the analytical signal from the cross-spectral imaginary values, while the expected value of its magnitude is taken to produce a robust normalization factor. Notice that the HT of a cosine produces a sine and vice versa. Thus, our aim with this operation is to (approximately) recover the missing real-part content of possible underlying interacting sources when only the non-contaminated imaginary part is used, for the reasons discussed above. A theoretical proof of the effectiveness of this operation in recovering the ignored real-part information is beyond the scope of this paper. However, we will empirically show in the next section the feasibility of this approach. Within the variety of coherence measures, another useful technique that is commonly used in the literature is the PLV (Lachaux et al. 1999): $$PLV\left( f \right)=\left| {{\rm E}\left[ {{e^{j(Phase\{ U\left( f \right)\} - Phase\left\{ {V\left( f \right)} \right\})}}} \right]} \right|,$$ which assumes that the signal amplitude and phase are statistically independent and uses only the phase content for estimating a possible interaction. We have used this measure in our comparison study to show that it is affected by VC similarly to the coherence estimator.
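As a concrete illustration of how Eqs. (13)–(15) can be estimated from trial-wise Fourier coefficients, the following is a minimal Python sketch (our own illustration, not the implementation used in the paper); applying the HT along the frequency axis per trial before averaging is one plausible reading of the expectation in Eq. (14):

```python
import numpy as np
from scipy.signal import hilbert

def spectra(u, v):
    """Trial-wise FFT coefficients and trial-averaged auto/cross-spectra.
    u, v: real arrays of shape (n_trials, n_samples)."""
    U, V = np.fft.rfft(u, axis=1), np.fft.rfft(v, axis=1)
    Suu = np.mean(np.abs(U) ** 2, axis=0)
    Svv = np.mean(np.abs(V) ** 2, axis=0)
    Suv = np.mean(U * np.conj(V), axis=0)
    return U, V, Suu, Svv, Suv

def icoh1(u, v):
    """Eq. (13): imaginary part of the complex coherence."""
    _, _, Suu, Svv, Suv = spectra(u, v)
    return np.imag(Suv / np.sqrt(Suu * Svv))

def icoh2(u, v):
    """Eq. (14): imaginary cross-spectrum with the HT-based normalization
    (HT applied along the frequency axis, per trial)."""
    U, V, _, _, Suv = spectra(u, v)
    icross = np.imag(U * np.conj(V))
    denom = np.mean(np.abs(hilbert(icross, axis=1)), axis=0)
    return np.imag(Suv) / denom

def plv(u, v):
    """Eq. (15): phase-locking value (Lachaux et al. 1999)."""
    U, V = np.fft.rfft(u, axis=1), np.fft.rfft(v, axis=1)
    return np.abs(np.mean(np.exp(1j * (np.angle(U) - np.angle(V))), axis=0))
```

Here the HT is computed with `scipy.signal.hilbert`, whose magnitude gives the envelope of the imaginary cross-spectral values across frequency.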
The set of state-of-the-art coherence based FC methods considered in this study is completed with the PLI (Stam et al. 2007), wPLI (Vinck et al. 2011), and lagged coherence (lCOH) (Pascual-Marqui et al. 2011): $$PLI\left( f \right)=\left| {E\left[ {\operatorname{sgn} \left( {Phase\{ U\left( f \right)\} - Phase\left\{ {V\left( f \right)} \right\}} \right)} \right]} \right|,$$ $$wPLI\left( f \right)=\left| {{\rm E}\left[ {\Im \{ U\left( f \right){V^*}(f)\} } \right]} \right|{\text{/}}{\rm E}\left[ {\left| {\Im \left\{ {U\left( f \right){V^*}\left( f \right)} \right\}} \right|} \right],$$ $$lCOH\left( f \right)=\Im {\left\{ {{C_{uv}}\left( f \right)} \right\}^2}{\text{/}}\left( {1 - \Re {{\left\{ {{C_{uv}}(f)} \right\}}^2}} \right).$$ The PLI is obtained from the expected value of the signum of the imaginary part, \({\rm E}\left[ {\operatorname{sgn} \left( {\Im \left\{ {{U_n}\left( f \right)V_{n}^{*}(f)} \right\}} \right)} \right]\), which is equivalent to \(\pm E\left[ {\operatorname{sgn} \left( {Phase\{ X\left( f \right)\} - Phase\left\{ {Y\left( f \right)} \right\}} \right)} \right]\), with a sign indeterminacy (for the example illustrated in Fig. 1, this indeterminacy refers to the sign of \({a_1}{b_2} - {a_2}{b_1}\)). In turn, wPLI is its weighted version, designed to achieve greater stability. Finally, we have included lCOH for completeness in our study, given its close similarity to the iCOH measure, but also to explore the effect of using a different normalization, which can either improve the sensitivity to detect FC or deteriorate performance under different VC or noise level scenarios.
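These remaining estimators can be sketched in the same style, again as our own illustrative reading rather than the paper's code, assuming trial-wise FFT coefficients `U` and `V` of shape `(n_trials, n_freqs)`:

```python
import numpy as np

def pli(U, V):
    """PLI (Stam et al. 2007): magnitude of E[sgn(Im{U V*})]."""
    return np.abs(np.mean(np.sign(np.imag(U * np.conj(V))), axis=0))

def wpli(U, V):
    """wPLI (Vinck et al. 2011): imaginary-part-weighted PLI."""
    icross = np.imag(U * np.conj(V))
    return np.abs(np.mean(icross, axis=0)) / np.mean(np.abs(icross), axis=0)

def lcoh(U, V):
    """Lagged coherence (Pascual-Marqui et al. 2011):
    Im{C}^2 / (1 - Re{C}^2), with C the complex coherence."""
    Suu = np.mean(np.abs(U) ** 2, axis=0)
    Svv = np.mean(np.abs(V) ** 2, axis=0)
    C = np.mean(U * np.conj(V), axis=0) / np.sqrt(Suu * Svv)
    return np.imag(C) ** 2 / (1.0 - np.real(C) ** 2)
```

For a pair of processes with a consistent non-zero phase lag and near-unit coherence, all three estimators approach 1, whereas for a zero or \(\pi\)-phase lag all three collapse toward 0, which is exactly the failure mode discussed in the text.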
Envelope of the Imaginary Coherence (EIC) Operator In order to obtain our proposed EIC operator, we compute the envelope of the iCOH function, \(z(f)\), as the amplitude of the analytical signal \(h\left( f \right)=z\left( f \right)+j\overline {z} (f)\), where \(\overline {z} (f)\) is obtained by using the HT function (Zygmund 2002): $$\overline {z} \left( f \right)= - \frac{1}{\pi }\mathop {\lim }\limits_{{\varepsilon \to 0}} \int\limits_{\varepsilon }^{{+\infty }} {\frac{{z\left( {f+\omega } \right) - z(f - \omega )}}{\omega }d\omega }$$ The HT is appropriate for constructing the envelope of narrow-band signals in the time domain. Wavelet analysis has been used in more general cases, but both techniques are typically applied after band-pass filtering to extract the oscillatory components within the frequency of interest in the signal. These techniques are used interchangeably in signal processing, particularly for time–frequency decomposition analysis, and there is no evidence to state the superiority of one approach over the other (Grandchamp and Delorme 2011). Our focus here is to recover the local envelope of the signal represented by the iCOH measure (in the frequency domain), in an attempt to partially recover and incorporate the information contained in its accompanying real part, as we demonstrate next. Figure 2 illustrates the EIC idea with a simple example. Suppose a 40 Hz sinusoidal function is weighted by a Gaussian bell (envelope curve) with a mean of 0.5 s and a standard deviation of 0.02 s (Fig. 2a). The envelope curve can be recovered exactly if the HT is used in the time domain to estimate the analytical signal (Fig. 2b). But instead, we may proceed to analyze the signal in the frequency domain using the FT and compute the envelope of the imaginary (EI) part as the absolute value of the analytical signal obtained by applying the HT only to the imaginary part of the FT coefficients (see Fig. 2c). As shown in Fig.
2d, the EI curve quite closely resembles the magnitude spectral density (MSD) of the original signal even though the EI curve is computed using only the imaginary part, which demonstrates the practicability of using the HT for recovering information that is lost when the real part is ignored, as in the example (Fig. 2c). The case of concern in our study is similar to this simple example with regard to the imaginary part of the coherence or cross-spectra. Following a similar reasoning, we heuristically support our case that EIC can recover missing information and thus provide more valuable content in comparison to other related iCOH-derived techniques. a One second segment of a time-limited signal x(t) which is obtained from an original 40 Hz sinusoidal by weighting with a Gaussian distribution function with mean of 0.5 s and standard deviation of 0.02 s. The Gaussian curve can be regarded as the envelope of the time-limited curve. b The envelope can be recovered from the time-limited signal by computing the absolute value of the analytical signal of x(t). c In the frequency domain, the FT of the signal, x(f), is represented by its real and imaginary parts, together with an EI part, which is obtained from the absolute value of the analytical signal of the imaginary part. d The MSD of x(f) is represented together with the positive part of the EI curve. Notice that both have similar characteristics and present a peak about 40 Hz In this example, the EI curve shows heavier tails compared to the MSD due to some border effects in the estimation of the analytical signal, but the important point is that the peaks of both functions occur near the same point. In the Supplementary Material, further evidence is provided to show the robustness of the EIC operator (Figs. S1 and S2). In particular, in Fig. S2, using the same signal as in Fig. 2, we demonstrate that if this type of envelope is computed only from the real part (blue curve in Fig.
2c), then the result is similar and we are again able to readily recover the missing information. Therefore, with respect to any frequency of interest, we can be confident that EIC can recover information about the FC strength that is lost when the real part is ignored, while being relatively robust with respect to the varying local phase and the waxing-waning behavior of the imaginary-part in the frequency domain. We now introduce two versions of the EIC operator corresponding to each of the discussed versions of iCOH. The first definition (\(EI{C_1}\)) derives directly from the application of HT on the imaginary part of the complex coherence \({C_{uv}}(f)\) which was defined in Eq. (13). This version can present some undesired behaviour as a result of the instability induced by the normalization term in the complex coherence estimation as discussed above. The second, and our preferred, definition (\(EI{C_2}\)), is derived in a similar way but from the new normalized version of the iCOH measure (see Eq. (14) above). The motivation is to compensate for the missing real-part when the imaginary-part is used exclusively. Based on these, two versions of EIC, \(EI{C_1}(f)\) and \(EI{C_2}(f)\), are formulated as follows: $$EI{C_1}\left( f \right)=\left| {\mathcal{H}\left( {\Im \left\{ {{C_{uv}}\left( f \right)} \right\}} \right)} \right|,$$ $$EI{C_2}\left( f \right)=\left| {\mathcal{H}\left( {{\rm E}\left[ {\Im \left\{ {U\left( f \right){V^*}(f)} \right\}} \right]/{\rm E}\left[ {\left| {\mathcal{H}\left( {\Im \left\{ {U\left( f \right){V^*}\left( f \right)} \right\}} \right)} \right|} \right]} \right)} \right|$$ Simulation of Source Activity with Autoregressive and Neural Mass Models To compare the performance of the coherence based measures, we prepared two types of simulations, one consisting of simple (linear) autoregressive model and the other based on more realistic nonlinear neural mass models (NMM) (Jansen and Rit 1995). 
These models simulate the interaction of activities among sources (e.g. \(x(t)\) and \(y(t)\) represented in Fig. 1), acting as ground truth, while their activities are only observed indirectly (e.g. \(u(t)\) and \(v(t)\) representing either EEG or MEG sensor recordings in Fig. 1). The values for the mixing coefficients are \({a_1}=0.75\), \({b_1}=0.5\), \({a_2}=0.5\) and \({b_2}=0.75\) [see Eqs. (1) and (2)]. Dynamics are generated by considering two different cases: (1) dependency given by influence from, say process \(y(t)\) onto \(x(t)\) in Fig. 1, mediated by a connectivity strength (\({C_{y \to x}} \ne 0\)) and information transmission delay, that can both be varied; and (2) independence of the processes, i.e. obtained by setting \({C_{y \to x}}=0\). To produce stable FC measurements, we simulate 1 s long epochs and 100 trials with same parameter values for each model, but using different noise replications. Although we present in this section a simulation framework for two regions, this can be straightforwardly extended to simulate any number of ROIs. For the dependency case, the generative process for the autoregressive model with two sources is described by: $$\begin{aligned} {x_n}\left( t \right)&=1.5{x_n}\left( {t - 1} \right) - 0.75{x_n}\left( {t - 2} \right)+{C_{y \to x}}{y_n}\left( {t - \delta } \right)+{\varepsilon _x}\left( t \right);~{\varepsilon _x}\sim N(0,\sigma _{x}^{2}), \hfill \\ {y_n}\left( t \right)&=1.5{y_n}\left( {t - 1} \right) - 0.75{y_n}\left( {t - 2} \right)+{\varepsilon _y}\left( t \right);~{\varepsilon _y}\sim N(0,\sigma _{y}^{2}), \hfill \\ \end{aligned}$$ where \(\delta\) represents the transmission delay for \(y \to x\) and \(n=1, \ldots ,N\) indicates the epoch index. 
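A minimal sketch of this bivariate generative model (illustrative only; the paper's simulations were run with MATLAB code provided in its Supplementary Material) could look as follows:

```python
import numpy as np

def simulate_ar(n_trials=100, n_samples=250, c_yx=0.5, delta=5,
                sigma=1.0, seed=0):
    """Bivariate AR(2) model of Eq. (22): y drives x with strength c_yx
    and a transmission delay of `delta` samples."""
    rng = np.random.default_rng(seed)
    burn = 200  # discard the initial transient
    T = n_samples + burn
    x, y = np.zeros((n_trials, T)), np.zeros((n_trials, T))
    ex = rng.normal(0.0, sigma, (n_trials, T))
    ey = rng.normal(0.0, sigma, (n_trials, T))
    for t in range(2, T):
        y[:, t] = 1.5 * y[:, t - 1] - 0.75 * y[:, t - 2] + ey[:, t]
        lagged = y[:, t - delta] if t >= delta else 0.0
        x[:, t] = (1.5 * x[:, t - 1] - 0.75 * x[:, t - 2]
                   + c_yx * lagged + ex[:, t])
    return x[:, burn:], y[:, burn:]
```

With a sampling frequency of 250 Hz, the AR(2) poles implied by the coefficients 1.5 and \(-0.75\) (modulus \(\approx 0.87\), angle \(30^{\circ}\)) place the spectral peak near 20 Hz, matching the text.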
In the simulations, the sampling frequency is \({F_S}=250\) Hz such that time step is 4 ms, and the range of communication delay is \(\delta \in \{ 1, \ldots ,12\}\) such that the fastest transmission delay is 4 ms and the slowest is 48 ms, which is within reasonable physiological range (Ringo et al. 1994; Izhikevich and Edelman 2008). The connectivity strength is set as \({C_{y \to x}}=0.5\), and \({\sigma _x}={\sigma _y}=1\) for each simulation. The coefficient values were chosen to produce 20 Hz oscillations. The generative process for the NMM is based on the classic Jansen and Rit (1995) model, but modified with explicit transmission delay for communication between ROIs and a stochastic term. The generating SDDEs system is described by: $$\begin{array}{*{20}{l}} {d{x_1}\left( t \right)={x_4}\left( t \right)~dt} \\ {d{x_2}\left( t \right)={x_5}\left( t \right)~dt} \\ {d{x_3}\left( t \right)={x_6}\left( t \right)~dt} \\ {d{x_4}\left( t \right)=\left[ {Aa~S\left\{ {{x_2}\left( t \right) - {x_3}\left( t \right)} \right\} - 2a{x_4}\left( t \right) - {a^2}{x_1}(t)} \right]~dt} \\ {d{x_5}\left( t \right)=\left[ {Aa\left( {{I_x}+{C_{y \to x}}{y_1}\left( {t - \tau } \right)+{C_2}~S\left\{ {{C_1}{x_1}\left( t \right)} \right\}} \right) - 2a{x_5}\left( t \right) - {a^2}{x_2}(t)} \right]~dt+Aa~d{W_x}(t)} \\ {d{x_6}\left( t \right)=\left[ {Bb\left( {{C_4}~S\left\{ {{C_3}{x_1}\left( t \right)} \right\}} \right) - 2b{x_6}\left( t \right) - {a^2}{x_3}(t)} \right]~dt} \\ {d{y_1}\left( t \right)={y_4}\left( t \right)~dt} \\ {d{y_2}\left( t \right)={y_5}\left( t \right)~dt} \\ {d{y_3}\left( t \right)={y_6}\left( t \right)~dt} \\ {d{y_4}\left( t \right)=\left[ {Aa~S\left\{ {{y_2}\left( t \right) - {y_3}\left( t \right)} \right\} - 2a{y_4}\left( t \right) - {a^2}{y_1}(t)} \right]~dt} \\ {d{y_5}\left( t \right)=\left[ {Aa\left( {{I_y}+{C_2}~S\left\{ {{C_1}{y_1}\left( t \right)} \right\}} \right) - 2a{y_5}\left( t \right) - {a^2}{y_2}(t)} \right]~dt+Aa~d{W_y}(t)} \\ {d{y_6}\left( t 
\right)=\left[ {Bb\left( {{C_4}~S\left\{ {{C_3}{y_1}\left( t \right)} \right\}} \right) - 2b{y_6}\left( t \right) - {a^2}{y_3}(t)} \right]~dt} \end{array}$$ where \(S\left\{ \upsilon \right\}=2{e_0}/(1+{e^{ - \rho (\upsilon - {\upsilon _0})}})\) is the input–output sigmoid function. We used the same values for neural mass parameters (\(A,a,B,b,{e_0},{\upsilon _0},{C_1},{C_2},{C_3},{C_4}\)) as in Jansen and Rit (1995), but in our case we added Wiener processes \({W_x}(t)\) and \({W_y}(t)\) to the equations to induce stochastic behaviour. We tuned the variances of \({W_x}(t)\) and \({W_y}(t)\) and set the average population transmembrane current \({I_x}={I_y}=220\) for producing alpha rhythm activity (\(\sim 10.87~{\text{Hz}}\)) (see additional details in Supplementary Material). For a set of simulations used later in the "Results" section, the connectivity strength \({C_{y \to x}}\) was taken in the range {50, 100, 150, 200, 250, 500} for a transfer delay of \(\tau =20\) ms, in order to compare the FC measures for the different values. We have also tested other values of the transfer delay parameter for consistency and similar results were obtained (see Fig. S9 in Supplementary Material). This system of SDDEs was numerically simulated using the Euler–Maruyama algorithm, which is appropriate for generating stochastic dynamics with Wiener processes (Higham 2001; Mao 2007; Touboul et al. 2012). Furthermore, this SDDEs system was also tested for analysis of stability and convergence as shown in Supplementary Material, Sect. 2. The stochastic integration was done with high time resolution (100 kHz or Δt = 0.01 ms) and later downsampled to 250 Hz using MATLAB custom code which is also provided in the Supplementary Material. 
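The Euler–Maruyama scheme itself can be illustrated on a toy delayed stochastic oscillator, a deliberately simplified stand-in for the full Jansen–Rit SDDE system above (the function, parameter values, and two-dimensional state layout below are our own illustrative choices, not the paper's):

```python
import numpy as np

def euler_maruyama_delay(f, g, x0, T, dt, tau, seed=0):
    """Euler-Maruyama integration of the SDDE
    dx = f(x(t), x(t - tau)) dt + g dW(t),
    with a constant history x(t) = x0 for t <= 0."""
    rng = np.random.default_rng(seed)
    n, lag = int(round(T / dt)), int(round(tau / dt))
    x = np.zeros((n + 1, len(x0)))
    x[0] = x0
    sqdt = np.sqrt(dt)  # Wiener increments scale with sqrt(dt)
    for k in range(n):
        xd = x[k - lag] if k >= lag else x[0]
        x[k + 1] = (x[k] + f(x[k], xd) * dt
                    + g * sqdt * rng.normal(size=len(x0)))
    return x

# Toy example: damped 10 Hz oscillator with weak delayed feedback
omega, gamma, c = 2.0 * np.pi * 10.0, 5.0, 10.0
f = lambda x, xd: np.array([x[1],
                            -omega ** 2 * x[0] - 2.0 * gamma * x[1]
                            + c * xd[0]])
g = np.array([0.0, 50.0])  # noise enters the velocity equation only
path = euler_maruyama_delay(f, g, np.array([0.1, 0.0]),
                            T=1.0, dt=1e-4, tau=0.02)
```

As in the paper's setup, the integration is performed at a fine time step and the resulting path would then be downsampled to the analysis rate.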
Finally, the signals \(x(t)\) and \(y(t)\) are generated as the local potentials, \(x\left( t \right)={x_2}\left( t \right) - {x_3}\left( t \right)\) and \(y\left( t \right)={y_2}\left( t \right) - {y_3}\left( t \right)\), according to the Jansen and Rit (1995) model. Additionally, we used a model-free simulation, particularly to test the robustness of the EIC and iCOH measures for interacting signals with varying bandwidth (\(\varpi\)), transmission delay (\(\delta\)) and noise level. Following Gross et al. (2001), \(x\left( t \right)\) is simulated as filtered white Gaussian noise at a frequency of interest (e.g. \(\omega =15.625\) or 1000/64 Hz), obtained using a narrow-band pass filter to extract the frequency components of \(\omega \pm \varpi /2\) Hz, while \(y\left( t \right)\) is directly derived as its delayed version (\(y\left( t \right)=x(t - \delta )\)). These signals were mixed to produce signals \(u(t)\) and \(v(t)\) using the coefficients \({a_1},{b_1},{a_2},{b_2}\) as discussed above for the bivariate autoregressive model and the NMM. We created 100 trials of 1 s length (\({F_S}=250\) Hz, one time step is 4 ms) and collected time-series \(u(t)\) and \(v(t)\) in matrices of \(2 \times 250\) dimensions (\({{\varvec{Y}}_S}\epsilon {\mathcal{R}^{2 \times 250}}\)). White Gaussian noise (\({\varvec{U}} \in {\mathcal{R}^{2 \times 250}}\)) was added to render the measurements: $${{\varvec{Y}}_M}=\beta \frac{{{{\varvec{Y}}_S}}}{{\left\| {{{\varvec{Y}}_S}} \right\|}}+(1 - \beta )\frac{{\varvec{U}}}{{\left\| {\varvec{U}} \right\|}}$$ where we have used the Frobenius norm \(\left\| \cdot \right\|\) and \(0 \le \beta \le 1\) to control the SNR effectively. The parameter \(\beta\) was selected in the range {0.9, 0.5, 0.1} to generate recordings with approximately 20, 0 and − 20 decibels of SNR.
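The norm-based mixing of Eq. (24) can be sketched directly (an illustrative implementation; the second, delayed sinusoid below is our own stand-in for a delayed signal pair). Note that \(\beta = 0.9\) yields an amplitude ratio of \(20\log_{10}(0.9/0.1) \approx 19\) dB, i.e. roughly the stated 20 dB:

```python
import numpy as np

def mix_with_noise(Ys, beta, seed=0):
    """Eq. (24): Frobenius-norm-scaled mixture of a signal matrix with
    white Gaussian noise; beta in [0, 1] controls the effective SNR."""
    rng = np.random.default_rng(seed)
    U = rng.normal(size=Ys.shape)
    # np.linalg.norm defaults to the Frobenius norm for 2-D arrays
    return (beta * Ys / np.linalg.norm(Ys)
            + (1.0 - beta) * U / np.linalg.norm(U))

# beta = 0.9, 0.5, 0.1 give amplitude ratios of roughly +19, 0, -19 dB
t = np.arange(250) / 250.0
Ys = np.vstack([np.sin(2 * np.pi * 15.625 * t),
                np.sin(2 * np.pi * 15.625 * (t - 0.016))])
Ym = mix_with_noise(Ys, beta=0.9)
```

The same construction is reused later for the background-activity and sensor-noise mixing stages of the realistic MEG simulation.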
In our simulation study, considering that \(\omega =15.625\) Hz is the central frequency (one cycle per 64 milliseconds), we selected \(\delta\) in the range {0, 2, 4, 8, 16, 32}, corresponding to time delays of 0, 8, 16, 32, 64 and 128 ms, or to interactions with 0, \(\pi /4\), \(\pi /2\), \(\pi\), \(2\pi\) and \(4\pi\)-phase differences. Lastly, \(\varpi\) was selected in the range {0.5, 1.0, 2.0, 5.0} Hz to create different scenarios in which the signals varied from narrow-band to broad-band. Realization of M/EEG Signals from Realistic Head/Source Model We introduce in this section more complex and realistic brain simulations for generating synthetic M/EEG signals. First, we use the SPM anatomical template with pre-computed meshes for the internal/external skull, skin and cortical surfaces. The cortical surface consists of 20,484 vertices and 40,960 triangles, providing a detailed representation of the subject's gyri and sulci formation as an excellent space for modelling activity and connectivity patterns in the brain. This choice is made for simplicity, but it is also supported by the well-known fact that pyramidal cells are the main contributors to M/EEG signals, given their convenient palisade structure and orientation within the cortical surface (Nunez and Srinivasan 2006). We also took the coordinates of the 102 magnetometer positions of an Elekta-Neuromag system after appropriate co-registration with the anatomical image of a test subject, and computed a boundary element method leadfield using the Fieldtrip toolbox (Oostenveld et al. 2011). Although the realistic simulation study is limited to the MEG case, our conclusions can be extended to analogous EEG analysis given their similarities. We shall consider several cases in this part of our simulation study, with signals generated using the MVAR and stochastic NMM. In particular, we simulate 3 and 5 dipoles or ROIs with their interactions as shown in Fig. 3.
Dynamics were generated by extending the set of equations that were introduced above for bivariate models. In the MVAR case, for 5 ROIs, five equations were used by directly extending from Eq. (22) using the same autoregressive coefficients, while the connectivity (\(C\)) and transfer delay (\(\delta\)) values were selected as \({C_{1 \to 2}}={C_{1 \to 3}}={C_{1 \to 4}}={C_{4 \to 5}}=0.1\), \({C_{5 \to 4}}= - 0.1\), \({\delta _{1 \to 2}}=1\), \({\delta _{1 \to 3}}=2\), \({\delta _{1 \to 4}}=3\), \({\delta _{4 \to 5}}=5\), \({\delta _{5 \to 4}}=5\). For 3 ROIs, \({C_{1 \to 2}}={C_{2 \to 3}}=0.1\), \({C_{3 \to 2}}= - 0.1\), \({\delta _{1 \to 2}}=2\), \({\delta _{2 \to 3}}=3\), \({\delta _{3 \to 2}}=3\). These values were selected to satisfy the stability condition (Lütkepohl 2005) while setting a sufficiently high value for the connectivity parameter. Location of sources used for 5 ROIs (a) and 3 ROIs (b) based simulations. Insets: connectivity graph for each case For simulation using the SDDEs system, 30 and 18 equations are needed for the 5 and 3 ROIs, respectively (six equations per ROI). The NMM parameters are the same as in the bivariate simulation except for the connectivity strength (\(C\)) and transfer delay (\(\tau\)) values: \({C_{1 \to 2}}={C_{1 \to 3}}={C_{1 \to 4}}={C_{4 \to 5}}={C_{5 \to 4}}=200\), \({\tau _{1 \to 2}}=1\) ms, \({\tau _{1 \to 3}}=5\;{\text{ms}}\), \({\tau _{1 \to 4}}=10\;{\text{ms}}\), \({\tau _{4 \to 5}}=20\;{\text{ms}}\), \({\tau _{5 \to 4}}=20\;{\text{ms}}\) for 5 ROIs; and \({C_{1 \to 2}}={C_{2 \to 3}}={C_{3 \to 2}}=200\), \({\tau _{1 \to 2}}=1\;{\text{ms}}\), \({\tau _{2 \to 3}}=10\;{\text{ms}}\), \({\tau _{3 \to 2}}=10\;{\text{ms}}\) for 3 ROIs. Each ROI is represented as a single vertex in the cortical surface and its location is indicated by the red point overlaid on the cortical surface (see Fig. 3a, b). 
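A sketch of the multi-ROI MVAR extension follows (illustrative only; the coupling and delay matrices `C` and `D` below encode the 3-ROI network described in the text, using our own array convention in which `C[i, j]` is the strength of the \(j \to i\) influence):

```python
import numpy as np

def simulate_mvar(C, D, n_trials=10, n_samples=250, seed=0):
    """Multi-ROI AR(2) network: each ROI follows the AR(2) dynamics of
    Eq. (22) plus delayed inputs, with C[i, j] the strength and D[i, j]
    the delay (in samples) of the j -> i influence."""
    rng = np.random.default_rng(seed)
    k, burn = C.shape[0], 200
    T = n_samples + burn
    x = np.zeros((n_trials, T, k))
    eps = rng.normal(size=(n_trials, T, k))
    start = max(2, int(D.max()))
    for t in range(start, T):
        x[:, t] = 1.5 * x[:, t - 1] - 0.75 * x[:, t - 2] + eps[:, t]
        for i in range(k):
            for j in range(k):
                if C[i, j] != 0.0:
                    x[:, t, i] += C[i, j] * x[:, t - int(D[i, j]), j]
    return x[:, burn:]

# 3-ROI network from the text: 1 -> 2 and 2 -> 3 with +0.1, 3 -> 2 with -0.1
C = np.zeros((3, 3))
D = np.zeros((3, 3), dtype=int)
C[1, 0], D[1, 0] = 0.1, 2    # C_{1->2}, delta_{1->2} = 2
C[2, 1], D[2, 1] = 0.1, 3    # C_{2->3}, delta_{2->3} = 3
C[1, 2], D[1, 2] = -0.1, 3   # C_{3->2}, delta_{3->2} = 3
```

The small coupling values keep the network within the stability condition mentioned in the text, while the base AR(2) coefficients are reused unchanged for every ROI.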
Most of the ROIs are located in the left hemisphere (left side of the figure); only ROI #5 in the first scenario is located in the right hemisphere. All interactions are unidirectional and feedforward, except those between ROI #4 and ROI #5, and between ROI #2 and ROI #3, in the first and second scenarios, respectively, which reflect recurrent or feedback connectivity. The latter was enforced to be more realistic with respect to true neuronal interactions, despite the fact that it might have a negative impact on the FC estimation. In general, we generated a 1 s long epoch and repeated the simulation 100 times (corresponding to 100 trials) to obtain consistent FC estimators. The simulated signals were centred per epoch and were used as the dynamics for the selected ROIs, for the 5 and 3 ROI scenarios shown in Fig. 3a, b, respectively. We also simulated background activity as white Gaussian noise at each of the remaining points in the cortical surface, separately for each point, and subsequently combined the two by controlling the ratio of signal to background-noise activity: $${{\varvec{Y}}_B}=\alpha \frac{{{{\varvec{Y}}_{ROIs}}}}{{\left\| {{{\varvec{Y}}_{ROIs}}} \right\|}}+(1 - \alpha )\frac{{{{\varvec{Y}}_{BG}}}}{{\left\| {{{\varvec{Y}}_{BG}}} \right\|}},$$ where \({{\varvec{Y}}_B}\), \({{\varvec{Y}}_{ROIs}}\) and \({{\varvec{Y}}_{BG}}\) are \(Ns \times Nt\) matrices (\(Ns=102\) sensors and \(Nt=250\) samples corresponding to 1 s at \(Fs=250\;{\text{Hz}}\)) containing the time-series for the mixed signals, the signals directly originating from simulated neural activity at the 5 or 3 ROIs, and the background activity, respectively, generated using the magnetic leadfield. The parameter \(\alpha\) allows us to control the signal-to-background-activity ratio effectively and was selected in the range {0.1, 0.5, 0.9} to simulate different noise levels resembling − 20, 0 and 20 decibels (dB), respectively.
Finally, we have also added i.i.d. Gaussian white measurement noise \({\varvec{U}}\), separately for each sensor, to produce more realistic synthetic MEG measurements by using the same strategy as above. That is, $${{\varvec{Y}}_{MEG}}=\beta \frac{{{{\varvec{Y}}_B}}}{{\left\| {{{\varvec{Y}}_B}} \right\|}}+(1 - \beta )\frac{{\varvec{U}}}{{\left\| {\varvec{U}} \right\|}},$$ where the SNR parameter was set to \(\beta =0.9\) to represent a realistic situation in which the sensors are well calibrated though measurement error is still present. Thus we were able to produce synthetic MEG signals, \({{\varvec{Y}}_{MEG}}\), which in turn were used to estimate the FC maps in the sensor space. In parallel, as the data will be observed only in sensor space, we have defined ROIs in this space corresponding to the actual source ROIs in the 5 and 3 ROI scenarios. For example, Fig. 4 shows the case when the 6 nearest sensors (KNS = 6) to each underlying source are considered. Later, in a ROC analysis, we will consider this number as a free parameter to avoid bias. Although the influences are mostly unidirectional as in Fig. 3, the bidirectional arrows represented in Fig. 4 show that in the sensor space the association between two regions, as commonly reflected by FC methods, lacks directionality. More generally, the transitivity rule applies to the FC measures discussed here, e.g. \(x \to y\) and \(y \to z\) interactions might also lead to an \(x \to z\) estimation, which is not shown in the expected interactions in Fig. 4 for clarity reasons. Nearest 6 sensors corresponding to underlying sources for the 5 ROIs (left) and 3 ROIs (right) based simulations.
The encircled sensors are the nearest sensors to each of the underlying sources, while the polygonal shapes enclose each ROI ROC Analysis of Recovered FC Networks For each particular FC measure, we defined the full FC map as the graph with nodes corresponding to the MEG sensors and edge weights corresponding to the magnitudes of the estimated FC values. This is a dense graph containing all the possible paired connections, as all the weights have positive values. Using the full FC map as reference, a collection of sparse FC graphs \(m=0,1, \ldots ,M\) can be obtained using the \((100m{\text{/}}M){\text{th}}\) percentile to extract those connections with the highest weights, e.g. the \(0{\text{th}}\), \(50{\text{th}}\) and \(100{\text{th}}\) percentiles denote the sparse FC maps corresponding to all, the 50% most relevant, and none of the connections, respectively, as identified in the full FC map. Based on the simulated ground truth and the selected K nearest sensors (KNS) ROIs, we can classify the sparse graph connections as true positive (TP) or false positive (FP), according to whether or not the identified connections link two different predefined ROIs, for some given neighborhood size (e.g. ROIs as represented in Fig. 4 for KNS = 6). Consequently, we can obtain \(TP(m)\) and \(FP(m)\) measurements from each full FC map (see Figs. S12, S13 in the Supplementary Material for an example of the classification of full FC graph connections as TP/FP for increasing threshold values). To evaluate the performance of each estimated FC measure, we compute the classical receiver operating characteristic (ROC) curve and its area under the curve (\(0 \le AUC \le 1\)) statistic. The ROC is a non-decreasing graphical plot of the true positive rate (TPR) as a function of the false positive rate (FPR), where these quantities can be directly obtained from our analysis as \(TPR\left( m \right)=TP(m)/TP(0)\) and \(FPR\left( m \right)=FP(m)/FP(0)\).
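The percentile-thresholding ROC/AUC procedure can be sketched as follows (an illustrative implementation; edge weights and ground-truth labels are assumed to be given as flat arrays over all sensor pairs):

```python
import numpy as np

def roc_auc(weights, truth, M=100):
    """ROC and AUC over percentile-thresholded sparse FC graphs.
    weights: (n_edges,) FC magnitudes of the full (dense) FC map;
    truth:   (n_edges,) bool, True where an edge links two true ROIs."""
    TP0, FP0 = truth.sum(), (~truth).sum()
    tpr, fpr = [], []
    for m in range(M + 1):
        thr = np.percentile(weights, 100.0 * m / M)
        keep = weights >= thr          # sparse graph at the m-th percentile
        tpr.append((keep & truth).sum() / TP0)
        fpr.append((keep & ~truth).sum() / FP0)
    tpr, fpr = np.array(tpr), np.array(fpr)
    order = np.argsort(fpr)            # sort by increasing FPR
    f, t = fpr[order], tpr[order]
    auc = np.sum(0.5 * (t[1:] + t[:-1]) * np.diff(f))  # trapezoidal rule
    return auc, fpr, tpr
```

An FC estimator that assigns its largest weights to the true ROI-to-ROI edges yields an AUC close to 1, while an uninformative one stays near 0.5.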
Proposed Normalization Procedure Improves iCOH Measure Figure 5a, b show the iCOH and the EIC envelope obtained directly from the complex-coherence (\(iCO{H_1}\) and \(EI{C_1}\)) and using the new normalization procedure introduced here (\(iCO{H_2}\) and \(EI{C_2}\)), respectively [see Eqs. (13), (14), (20) and (21)]. These measures were compared using time-series for two interacting sources that were generated using the bivariate autoregressive model in "Simulation of Source Activity with Autoregressive and Neural Mass Models" section. We considered time delays from 4 to 48 ms (\(\delta \epsilon \left\{ {1, \ldots ,12} \right\}\), time step is 4 ms) to induce changes in the phase difference between the interacting processes. Imaginary coherence (blue) and its envelope (red) as represented by two versions of iCOH and EIC. The classical complex coherence normalization step (a), and proposed HT-derived normalization procedure (b) are used. As a result, curve values appear normalized (magnitude values are equal or less than 1) for all frequency values (0–125 Hz). Upper and lower branches of the envelope are EIC curve and its horizontal mirror image (negative part), respectively. Measures were computed from model simulations with different communication delays \(\delta \epsilon \left\{ {1, \ldots ,12} \right\}\) for the processes u(t) and v(t) as represented in Fig. 1. Each delay time step constitutes 4 ms. Vertical black dashed line denotes 20 Hz, the dominant component of the simulated processes It is evident that the classic coherence normalization produces excessive ripples in the imaginary coherence derived EIC function (lags from 9 to 12 in second row). Additionally, the unique peak that should be obtained for the main component of 20 Hz is not stable for all the considered lags in Fig. 5a. 
The \(EI{C_1}\) peak appears to the right of the 20 Hz line for lags 1 to 3 and to the left for the subplots corresponding to lags 6 to 8, and for lags 9 to 12 we can observe up to two peaks. However, when we apply the HT-derived normalization, as for the \(EI{C_2}\) measure, the peak and curves become stable and unimodal. Importantly, as shown in Fig. 5b, the \(EI{C_2}\) peak is now correctly centered at the 20 Hz (black dashed) line. Due to these superior results, from now onwards we will refer implicitly to the \(EI{C_2}\) version wherever we discuss EIC results. Since iCOH with the new normalization also produced negligible FC for zero and \(\pi\)-phase interactions, as well as the same waxing-waning irregular behaviour as the original iCOH, we will henceforth use only the original formulation (Nolte et al. 2004). EIC Is Most Robust Among iCOH Indices for Bivariate FC Analysis The previous simulation based on a bivariate autoregressive model is also a fine example to show the robustness of EIC when compared to other iCOH-related FC estimators. Similar to the Fig. 5 example, Fig. 6 shows the iCOH and EIC curves, but in separate rows, together with the ground truth, lCOH, PLI and wPLI estimators for the same simulated data. In the first row of Fig. 6, we show the golden true estimator (i.e. the source-based coherence measure); whereas lCOH, iCOH, PLI, wPLI and EIC were estimated from the signals collected at the sensors (e.g. \(u(t)\) and \(v(t)\) represented in Fig. 1), the golden true estimator is the coherence measure that is obtained directly from the source signals (e.g. \(x(t)\) and \(y(t)\) in Fig. 1), which are unknown in a real scenario. The significance of FC values is determined by a threshold curve which was computed using the maximum (minimum) value statistics of FC values obtained from surrogate data (Lachaux et al. 1999). We used 1000 randomized samples in our simulation and computed these statistics for each frequency separately.
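The surrogate idea can be sketched by shuffling the trial pairing between the two channels, which destroys genuine cross-trial phase consistency while preserving each channel's spectra (an illustrative, quantile-based variant of the randomization; the paper uses maximum/minimum value statistics, and `fc_fun` stands for any per-frequency FC estimator):

```python
import numpy as np

def surrogate_threshold(u, v, fc_fun, n_surr=1000, alpha=0.05, seed=0):
    """Per-frequency significance curve from surrogate data: shuffling
    the trial pairing between u and v destroys genuine coupling, and the
    (1 - alpha) quantile of the surrogate FC values gives the threshold.
    u, v: (n_trials, n_samples); fc_fun maps (u, v) -> per-frequency FC."""
    rng = np.random.default_rng(seed)
    surr = [fc_fun(u, v[rng.permutation(u.shape[0])])
            for _ in range(n_surr)]
    return np.quantile(np.array(surr), 1.0 - alpha, axis=0)
```

Observed FC values that exceed this curve at a given frequency are deemed significant; genuinely coupled signals exceed it broadly, while independent signals stay below it at roughly the nominal rate.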
Different FC methods including the ground truth FC estimator for two interacting sources in a bivariate autoregressive model with varying communication delay and constant connectivity strength \({C_{y \to x}}=0.5\) [see Eq. (22)]. FC measures (blue curves) appear normalized according to their formulae so that the magnitude is ≤ 1 for all frequency values (0–60 Hz). The threshold curve and the main frequency component are denoted by a red line and a vertical black dashed line, respectively, in the subplots. Notice that at the communication delays \(\delta =5\) and \(\delta =11\) (i.e. almost 25 and 50 ms delays, respectively, and consequently with a phase difference between the signals near zero or \(\pi\), modulo \(2\pi\)), lCOH, iCOH, PLI and wPLI produced negligible FC, whereas EIC correctly reflected the true FC value. Also, except for EIC, the other FC methods exhibited a defective FC curve due to other negligible values that appeared, apparently, as a result of the interaction between ongoing and incoming oscillations. The most outstanding result is that EIC is the FC estimator that most closely resembled the golden true value, as a consequence of the use of the HT operator to partially recover the ignored real part. On the other hand, for data generated from two interacting sources with the SDDEs system introduced above [see Eq. (23)], we tested different transfer delays and connectivity values to study the influence of these parameters on the FC estimation. Figure 7a shows that for the iCOH-related indices (i.e. lCOH, iCOH, PLI, wPLI and EIC), the estimated FC strength at 10.87 Hz increased proportionally for higher values of the connectivity parameter and reached the maximum value for \({C_{y \to x}}=500\). At the same time, their FC estimates were non-significant for the lower values, \({C_{y \to x}}=50\) and \({C_{y \to x}}=100\), according to the surrogate-based statistics (Lachaux et al. 1999).
This is consistent with the golden true estimated curve (shown in the first row), which also gradually increased with higher values of the connectivity parameter, being significant for values \({C_{y \to x}} \ge 50\). Moreover, COH and PLV showed higher values around 10.87 Hz independently of the simulated connectivity strength, which is related to VC, as further supported in the next example. In general, it can be noticed that EIC seems to be the smoothest across frequencies and the most stable estimator compared to the other methods, and was remarkably sharp for estimating the FC strength at the dominant frequency (i.e. 10.87 Hz), though the other FC indices also showed good results for this type of simulation. In the Supplementary Material, we show the effect of varying the delay on the phase difference for a fixed connectivity strength, \({C_{y \to x}}=200\), which also demonstrates the superior performance of EIC (see Fig. S9). a Different FC methods for two interacting sources in an SDDEs based neural mass model with signal transmission delay \(\tau=20\;{\text{ms}}\) [Eq. (23)] for different values of connectivity strength. Measures appear normalized according to their formulae for each case so that the magnitude is ≤ 1 for all frequency values (0–25 Hz). Blue curve: FC function; red curve: surrogate based statistics; black dashed line: 10.87 Hz. b Similarly but when signals are uncoupled Interestingly, EIC seems to be affected by the surrogate-based statistics, which overestimated the threshold at 10.87 Hz. The latter might be due to the failure to exactly recover the missing real part using the HT operator, particularly for estimating the normalization term. However, it may also arise as an effect of a highly stable synchronization, which is characterized by an almost constant phase difference (Lachaux et al. 1999). The latter seems to be the more plausible explanation, given that this situation did not appear for the EIC threshold curve shown in Fig.
6, and considering that the bivariate autoregressive model produces broad-band signals whereas the SDDEs' signals have narrow-band characteristics. For PLI and wPLI, this statistic also showed relatively higher values, whereas it showed smaller values for lCOH; neither affected the results. Next, we consider the specific case of no interaction by setting \({C_{y \to x}}=0\) in the simulation. In Fig. 7b it is clear that the COH and PLV measures are prone to finding spurious connections due to VC, since with no true interaction there should be none or very few points of the connectivity curve above the estimated cutoff. In contrast, the iCOH-related indices correctly measured the absence of interaction. We shall henceforth narrow our study, focusing mainly on iCOH-index-based FC measures using more realistic simulated data.

Finally, we explored the performance of the iCOH and EIC measures only, using signals obtained as narrow-band-filtered Gaussian white noise. As presented in "Simulation of Source Activity with Autoregressive and Neural Mass Models" section, we simulated the interaction between two processes for different values of the communication delay, filter bandwidth, and SNR to create different situations. Figure 8 shows that iCOH and EIC effectively ignored instantaneous interactions (1st column, lag = 0) for the different SNR and signal bandwidth values. At the frequency of interest (15.625 Hz), iCOH showed the highest values for lag = 2 (2nd column, \(\pi /4\) phase difference) and lag = 4 (3rd column, \(\pi /2\) phase difference). For lags = 8, 16, 32 (corresponding to \(\pi\)-, \(2\pi\)- and \(4\pi\)-phase interactions) and higher bandwidth values (\(\varpi =2.0,5.0\;{\text{Hz}}\)), iCOH showed negligible values, as expected, with a clear full oscillation about 15.625 Hz for the \(\pi\)-phase difference; interestingly, EIC showed a very clear peak at 15.625 Hz for these values.
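The filtered-noise experiment just described can be sketched as below: one band-filtered Gaussian noise process and its delayed, noisy copy, with iCOH read off at the frequency of interest. The sampling rate (250 Hz, so that 15.625 Hz falls exactly on an FFT bin), filter order and function names are our assumptions; the design follows the text (lag = 0 gives a purely instantaneous relation, lag = 4 a \(\pi/2\) phase difference, lag = 8 a \(\pi\) phase difference at 15.625 Hz).

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lagged_pair(lag, f0=15.625, bw=5.0, fs=250.0, n=8192, snr_db=20, seed=0):
    """Band-filtered Gaussian noise and a delayed, noisy copy (a sketch
    of the Fig. 8 simulation; parameter values are our assumptions)."""
    rng = np.random.default_rng(seed)
    lo, hi = (f0 - bw / 2) / (fs / 2), (f0 + bw / 2) / (fs / 2)
    b, a = butter(4, [lo, hi], btype="band")
    s = filtfilt(b, a, rng.standard_normal(n + lag))
    x = s[lag:]            # leading process
    y = s[:n]              # copy delayed by `lag` samples
    g = 10 ** (-snr_db / 20) * x.std()   # additive noise level from SNR
    return x + g * rng.standard_normal(n), y + g * rng.standard_normal(n)

def icoh_at(x, y, f0, fs, nseg=16):
    """|Im coherency| at the bin nearest f0, via segment averaging."""
    n = len(x) // nseg
    X = np.fft.rfft(x[:n * nseg].reshape(nseg, n), axis=1)
    Y = np.fft.rfft(y[:n * nseg].reshape(nseg, n), axis=1)
    c = (X * Y.conj()).mean(0) / np.sqrt(
        (np.abs(X) ** 2).mean(0) * (np.abs(Y) ** 2).mean(0))
    return abs(c[int(round(f0 * n / fs))].imag)
```

Running this reproduces the qualitative pattern of Fig. 8: iCOH is negligible for lag = 0 and for the \(\pi\)-phase lag, and large for the quarter-cycle lag.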
The only cases where EIC failed to find any interaction were very noisy scenarios (SNR = − 20 dB) or a too-small signal bandwidth (\(\varpi =0.5,1.0\;{\text{Hz}}\) in the simulations). In this analysis, we used only iCOH as representative of the other iCOH indices because they fail similarly for zero- or \(\pi\)-phase interactions, as evidenced earlier in Fig. 6. As a complement, we show in the Supplementary Material (Fig. S10) the significance of the above results using the surrogate-based statistics. In the latter case, we used the same settings but simulated 100 and 1000 trials.

FC measures (iCOH—blue curve, EIC—red curve) between two processes simulated from a filtered white Gaussian noise signal and its delayed version, for a particular frequency of interest (15.625 Hz, vertical dashed black line) and a particular bandwidth. The results correspond to three different SNR levels: a 20, b 0 and c − 20 dB. Columns: subplots arranged according to simulated transfer delays from lag = 0 to 32 time instants. Rows: subplots arranged according to the simulated signals' bandwidths from 0.5 to 5.0 Hz.

EIC and iCOH Are the Most Accurate in Sensor Space

Now, we demonstrate the methodology introduced here using synthetic MEG data generated with a large-scale model simulation as presented in "Realization of M/EEG Signals from Realistic Head/Source Model" section. In summary, we simulated MEG data for 100 trials using different MVAR- or SDDEs-generated signals as the dynamics for the selected 3 and 5 ROIs, as well as different realizations of Gaussian noise generated separately for each of the remaining cortical vertices and sensors, to model the background activity and measurement noise. Specifically, the data for the ROIs, background and measurement noise signals were combined using Eqs. (25) and (26) to produce the MEG data that was used for the estimation of the FC methods under study.
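The forward-mixing step can be sketched as a plain linear projection. Eqs. (25) and (26) are not reproduced in this excerpt, so the snippet below is a generic stand-in: sensor data as leadfield-times-sources plus white measurement noise scaled from a target SNR. The function name and SNR convention are our assumptions.

```python
import numpy as np

def project_to_sensors(sources, leadfield, snr_db=0.0, seed=0):
    """Linear forward mixing (a generic stand-in for the paper's
    Eqs. (25)-(26)): sensors = leadfield @ sources + measurement noise.
    The instantaneous mixing is what produces volume conduction."""
    rng = np.random.default_rng(seed)
    m = leadfield @ sources                      # (n_sensors, n_times)
    noise = rng.standard_normal(m.shape)
    noise *= m.std() * 10 ** (-snr_db / 20)      # scale noise to snr_db
    return m + noise
```

Because every sensor is a weighted sum of all sources at the same instant, any zero-lag correlation introduced here is pure VC, which is exactly what the imaginary-part-based measures are designed to ignore.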
Additionally, we produced 100 Monte Carlo realizations of this process in order to compute the same number of ROC curves and AUC statistics in the subsequent performance analysis of the FC measures. When creating the 100 Monte Carlo realizations, we kept the same SDDEs-generated data for all the trials to reduce computational cost, whereas the MVAR-simulated data, as well as the background and measurement signals, were generated independently for each realization. In the following analysis, we varied the connectivity threshold in the min–max range to produce ROC curves (not shown), as discussed in "ROC Analysis of Recovered FC Networks" section, and allowed the sensor ROI size to vary in the range KNS = 1–10 (only shown for KNS = 6 to 10). Figure 9 shows boxplot graphs summarizing the AUC values over the 100 realizations, comparing the FC methods for analyses corresponding to 3 and 5 ROIs, using signals generated with the VAR and SDDEs models, and SNR levels of − 20, 0 and 20 dB. In general, the results for KNS = 1, 2 are poor for all FC methods, with higher variance and lower mean AUC (not shown), possibly as a consequence of a weak correspondence between the interactions reflected at the sources' nearest sensors and the estimated predominant connections. However, from KNS = 6 onwards the results are stable, with non-significant differences among the higher KNS values. Per row, the panels' boxplots use the same y-axis scale so that visual comparisons can be made between the AUC values obtained for the MVAR and SDDEs models; it is also possible to visually find some differences among the outcomes for the different SNR values, and between 3 and 5 ROIs. This graphical outcome is better understood with the results shown in Tables 1, 2, 3 and 4, as discussed below.
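The ROC/AUC computation described above (sweeping the connectivity threshold over the min–max range of scores and comparing against the ground-truth connections) can be sketched with plain numpy. The function name is ours; scores are the FC values of candidate connections and `truth` the binary ground-truth adjacency.

```python
import numpy as np

def roc_auc(scores, truth):
    """ROC curve and AUC for connection scores against a binary
    ground-truth adjacency, implicitly sweeping the threshold over all
    observed score values (the min-max range mentioned in the text)."""
    scores = np.asarray(scores, float)
    truth = np.asarray(truth, bool)
    t = truth[np.argsort(-scores)]           # truth sorted by score, desc.
    tpr = np.concatenate(([0.0], np.cumsum(t) / max(t.sum(), 1)))
    fpr = np.concatenate(([0.0], np.cumsum(~t) / max((~t).sum(), 1)))
    # trapezoidal integration of TPR over FPR gives the AUC
    auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0)
    return fpr, tpr, auc
```

An AUC of 1 corresponds to a perfect ranking of true over false connections, 0.5 to chance, which is how the boxplots in Fig. 9 are to be read.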
Boxplots of AUC values for 100 realizations using five different FC measures, two signal-generation models, two ground-truth scenarios and three SNR levels. The panels are arranged in two columns corresponding to signals generated using the VAR and SDDE models (left and right columns). Across rows, the panels show the results when signals were generated using 3 or 5 ROIs at different noise levels, i.e. rows 3A, 3B and 3C show the outcome for 3 ROIs using SNR = − 20 dB, SNR = 0 dB and SNR = 20 dB, respectively, and similarly for rows 5A, 5B and 5C. Per panel, the boxplots are grouped in five columns corresponding to different sizes of the sensor neighbourhood (KNS = 6–10, arranged from left to right) used to classify the connections into TP and FP, and thus compute the ROC and AUC values. Each panel column contains five subplots corresponding to the FC measures lCOH, iCOH, PLI, wPLI and EIC, arranged in this order from left to right and highlighted with different colours.

Table 1 Summary of non-parametric test #1, showing the SNR level(s) for which each FC measure (shown per row) produced significantly higher AUC values, for all possible combinations of ground-truth scenario and signal-generation model (the latter two interleaved across columns), comparing − 20 dB (\(\alpha =0.1\)) versus 0 dB (\(\alpha =0.5\)) versus 20 dB (\(\alpha =0.9\)). [Only fragments of the table body are recoverable, e.g. for 3 ROIs: lCOH with VAR, − 20 and 0 dB; lCOH with SDDE, − 20 dB; the remaining cells for iCOH, wPLI and EIC are not recoverable.] If all the paired tests among the SNR levels are significant (Bonferroni correction, N = 60 pairwise comparisons), the entry indicates the best SNR level (i.e. the one with significantly higher AUC values); otherwise, the entry indicates the "better" SNR levels (i.e. those with higher AUC values but a non-significant comparison between them) (e.g.
for 3 ROIs, the iCOH and SDDE combination, the simulation with SNR = − 20 dB, i.e. parameter \(\alpha=0.1\), produced significantly higher AUC values; for the same combination but with EIC, we found the highest AUC values for SNR = − 20 or 0 dB, with non-significant differences between them).

Table 2 Summary of non-parametric tests #2 (first half of the table) and #3 (second half). Test #2: VAR versus SDDE signal-generation models; test #3: 3 versus 5 ROIs. [The table body is not recoverable beyond the fragment "− 20 dB SDDE".] Following the logic presented in Table 1, the value in each cell corresponds to the population with the higher median of AUC values for each analysis if the test is significant (Bonferroni correction, N = 30 paired comparisons for both tests #2 and #3); otherwise, the entry indicates that the comparison between the two options was non-significant (NS).

Table 3 Win–Loss–Draw (W–L–D) scores for the pairwise comparisons among FC methods, together with the total accumulated W–L–D and points, for the classical significance level α = 0.05 (first half) and with Bonferroni multiple-comparison correction (N = 120 pairs; second half). Two point-accumulation systems are considered: (1) a win adds 3 points and a draw 1 point, as in European football (e.g. the Champions League competition); and (2) a win adds 1 point and a draw 0.5 points, as in a chess tournament. [The table body is not recoverable.]

Table 4 Overall comparison among the FC methods: one versus all, as in athletics, for MEG signals generated with the VAR and SDDE models. [Recoverable fragment: for SNR = − 20 dB (\(\alpha =0.1\)), the best measures included lCOH, iCOH, wPLI, EIC and iCOH, wPLI, EIC and iCOH, EIC; the rows for SNR = 0 dB (\(\alpha =0.5\)) and SNR = 20 dB (\(\alpha =0.9\)) are not recoverable.] Bonferroni correction is used to control for multiple comparisons (120 pairs). The best measure among the iCOH indices is shown for each combination of three SNR levels, two ground-truth scenarios, and two signal-generation models.
When there is not a clear winner (the best method is not significantly superior to its closest rivals), the group of tied winners is shown.

We conclude our simulation study with a detailed statistical analysis of the differences among the simulated scenarios. Recall that in this part we are using five different FC measures (iCOH indices), three SNR levels (− 20, 0, 20 dB), two signal-generation models (VAR and SDDEs), and two ground-truth scenarios (3 and 5 ROIs). However, in contrast to the outcome shown in Fig. 9, for each separate MC realization we average the AUC values corresponding to KNS = 6–10 for all the simulated scenarios. For clarity, the analysis has been carried out as follows:

1. Separately, for each combination of FC measure, signal-generation model and ground-truth scenario, compare AUC values for − 20 dB (\(\alpha =0.1\)) versus 0 dB (\(\alpha =0.5\)) versus 20 dB (\(\alpha =0.9\)).

2. Separately, for each combination of FC measure, ground-truth scenario and SNR level, compare AUC values for the MVAR versus SDDEs FC outcomes.

3. Separately, for each combination of FC measure, signal-generation model and SNR level, compare AUC values for the 3 versus 5 ROIs outcomes.

4. Separately, for each combination of signal-generation model, ground-truth scenario and SNR level, compare AUC values for paired FC measures, i.e. lCOH versus iCOH versus PLI versus wPLI versus EIC.

The statistic used for tests #1–#3 was the ranksum test, which implements the two-sided Mann–Whitney U test (null hypothesis: equal medians), because the samples in each compared population were computed from different data. For test #4, we used the two-sided signed-rank test (null hypothesis: the median of the paired sample differences is zero), because here the AUC samples were produced by applying different FC methods, with each pair of matched samples estimated from the same simulated data. For test #1, as evidenced in Fig.
9 and Table 1, AUC values were significantly higher at SNR = 0 dB for the MVAR model for all iCOH indices, whereas for the SDDEs model the best noise level was SNR = − 20 dB in most cases. Notice that the SDDEs-generated signals have a much narrower band than the MVAR ones, which makes the FC estimates in this frequency band more tolerant to lower SNR. The outcome of Table 2 for test #2 (first half) is somewhat complementary to the above results, since for the lowest SNR level (− 20 dB) the highest AUC values were obtained with the SDDEs model rather than the MVAR model. For SNR = 0 or 20 dB, the highest significant AUC values were achieved with MVAR when 5 ROIs were simulated in most cases, whereas for 3 ROIs and SNR = 20 dB the best results were again achieved with the SDDEs model. Interestingly, the test #3 outcome for the 3 versus 5 ROIs comparison (Table 2, second half) showed that the highest significant AUC values were obtained when 3 ROIs were simulated for SNR = − 20 or 0 dB, which can be interpreted as an increased difficulty in recovering the underlying FC networks when more ROIs/interactions are involved. The comparison among the iCOH indices is conducted in test #4. We first give an overall summary of each pairwise comparison of two iCOH indices using an analogy with a sports competition: the FC method that produced the higher AUC values is declared the winner of a comparison if the test is significant, and the two methods "draw" if it is non-significant. We then summarize across all 12 combinations (games) (i.e. 12 = 3 SNR levels × 2 generation models × 2 ground-truth scenarios) in which each pair was compared. Table 3 shows these results, including scores for the "competition" under two different scoring systems. We can observe that the two clear "winners" of this analysis are iCOH and EIC, which stand above the other FC measures. Moreover, Table 4 allows us to study this result in more detail.
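The rank-based comparisons behind Tables 1, 2, 3 and 4 can be sketched with the standard SciPy equivalents of MATLAB's `ranksum`/`signrank` (an assumption: the text names the tests, not the toolbox). Unpaired populations (tests #1–#3) use the two-sided Mann–Whitney U test; matched AUC samples (test #4) use the two-sided Wilcoxon signed-rank test, with Bonferroni correction over the number of comparisons.

```python
import numpy as np
from scipy.stats import mannwhitneyu, wilcoxon

def compare_auc_populations(a, b, paired=False, n_tests=1, alpha=0.05):
    """Two-sided rank test between two AUC samples with Bonferroni
    correction over n_tests comparisons (function name is ours).
    paired=False -> Mann-Whitney U (tests #1-#3, independent samples);
    paired=True  -> Wilcoxon signed-rank (test #4, matched samples)."""
    if paired:
        _, p = wilcoxon(a, b)
    else:
        _, p = mannwhitneyu(a, b, alternative="two-sided")
    return p, p < alpha / n_tests
```

For example, test #1 would call this with `n_tests=60` (the N = 60 pairwise SNR comparisons) and test #4 with `paired=True` and `n_tests=120`.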
In summary, we can observe that iCOH produced the best results for the highest SNR (20 dB), whereas EIC was noticeably better for moderate SNR (0 dB). For the lowest SNR (− 20 dB), several methods, but mainly iCOH and EIC, produced better results.

In this study, we have proposed a new technique (EIC) to circumvent the heavy reliance of imaginary-coherence-based FC methods (lCOH, iCOH, PLI, wPLI) on the imaginary part of the cross-spectrum or complex coherence. EIC was defined as the absolute value of the analytic signal estimated from the iCOH function in the frequency domain, which approximately renders an iCOH envelope. As a result, EIC inherits the resilience against VC effects. We used a simplified representation of the EEG/MEG forward problem [Fig. 1, and Eqs. (1) and (2)] to demonstrate that the idea of using the imaginary part is rightly justified, given that only the imaginary part of the cross-spectrum of two sensor signals is directly related to the imaginary part of the cross-spectrum of two possibly interacting underlying sources, as shown in Eq. (5). The real part is contaminated by VC and is therefore usually ignored by techniques such as lCOH, iCOH, PLI and wPLI, even though it contains important information. One immediate negative effect is that these measures show negligible connectivity when the phase difference of the interacting processes is near zero or \(\pi\) (modulus \(2\pi\)) (Stam et al. 2007; Vinck et al. 2011; O'Neill et al. 2017). Although the EIC method is estimated only from the imaginary term, we demonstrated that it is able to partially recover information from the real part (see Figs. 5, 6, 8). The main reason is that EIC is based on the HT, which, applied to the imaginary part, can roughly reproduce its real counterpart. In particular, we showed with a simple example that the EIC curve can recover very well the magnitude spectrum, which is, of course, estimated using both the real and imaginary parts (Fig. 2).
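Our reading of the EIC definition above can be sketched in a few lines: take the imaginary part of the complex coherency, form its analytic signal with the Hilbert transform along the frequency axis, and use its magnitude as the envelope. The function name, the clipping to keep the magnitude ≤ 1 (matching the normalization stated for the figures), and the use of `scipy.signal.hilbert` are our assumptions; this is a sketch, not the paper's exact implementation of Eq. (19).

```python
import numpy as np
from scipy.signal import hilbert

def eic_from_coherency(coherency):
    """Envelope of the imaginary coherency (our reading of the EIC
    definition): magnitude of the analytic signal obtained by applying
    the HT to Im(coherency) along the frequency axis."""
    ic = np.imag(coherency)
    env = np.abs(hilbert(ic))          # analytic-signal magnitude
    return np.clip(env, 0.0, 1.0)      # keep magnitude <= 1 (assumption)
```

The key property is visible on a toy input: if the imaginary coherency oscillates across frequency (as happens near \(\pi\)-phase interactions), pointwise |Im coherency| dips to zero at the crossings while the envelope stays at the oscillation's amplitude.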
In practice, we have shown the superior performance of EIC versus the other iCOH-related indices using synthetic signals generated by bivariate autoregressive and SDDEs-based NMM [see Eqs. (22) and (23)]. We extended these simulations and the comparison framework to more realistic simulations producing synthetic MEG signals based on 3 and 5 simulated ROIs (Fig. 3), which in turn were used to evaluate the feasibility of FC analysis in sensor space using these techniques and a novel sensor-nearest-ROIs-based ROC analysis.

EIC Versus Other iCOH Related Indices

The main advantage of the imaginary coherence indices (lCOH, iCOH, PLI, wPLI, EIC) is their robust performance in VC situations, though the usual iCOH measure proposed in the literature may be negatively affected by an unstable normalization, as discussed in this work ("Proposed Normalization Procedure Improves iCOH Measure" section). The same drawback can be claimed for the lCOH method (Pascual-Marqui et al. 2011), which uses the real part of the coherence in the denominator (normalization term), so that its scale can be affected by VC and noise. PLI and wPLI do not suffer from this problem because they depend exclusively on the phase difference and use proper normalizations. On the other hand, the basic limitation of these measures is that they rely heavily on the imaginary part while directly discarding any useful information contained in the real part. As we demonstrated with simulations, the above methods effectively avoid spurious FC due to VC effects in the absence of true connectivity (Fig. 7b); however, they also fail to capture true connectivity when it occurs with zero- or \(\pi\)-phase interactions (Figs. 6, 8). With the introduction of EIC we solved the latter problem to some extent; in particular, we demonstrated with the simulation and results shown in Fig.
8 that EIC can capture true interactions despite zero- or \(\pi\)-phase lags if the signal bandwidth is broad enough, while remaining robust to VC effects. With the EIC method, we also highlighted the fact that lCOH, iCOH, PLI and wPLI are point-wise estimators, given that their computations are made independently for each frequency bin. As can be seen in the harmonic analysis of M/EEG signals, amplitude and phase tend to vary smoothly across frequency, so taking such smoothness into account is essential to produce more robust estimators that remain consistent, e.g. in noisy scenarios. From this perspective, EIC is potentially a more robust measure that better exploits the content of the imaginary part by implicitly using the HT [see Eq. (19)]. The impact of the time delay and the connectivity-strength parameter on the coupling of two oscillators has been well studied in the literature (Dhamala et al. 2004; Gollo et al. 2014; Strogatz 2000). Here we studied both parameters using bivariate autoregressive and SDDEs-based NMM and found that only the information-transfer delay has a visible impact on the phase difference of interacting oscillators. The main effect of the connectivity strength is that at least a minimum value is required to guarantee synchronization of the ongoing activity, as shown in Fig. 7a. However, the problem of negligible connectivity found by the iCOH indices may appear in more complex scenarios and is not only caused by the time delay, which could hinder interpretation (see Fig. 6 and the discussion therein). Our newly proposed EIC method was almost unaffected by a varying transfer delay as a consequence of exploiting the smooth variability across the frequency domain. Consequently, EIC showed more resilience than the other iCOH-derived methods, which may translate into improved FC estimation for real M/EEG data analysis. We presented here the EIC measure based on the HT, but any operator that produces a robust envelope could do similar work.
The HT is attractive because of its mathematical properties, and it is particularly useful for computing the envelope of band-limited oscillators. Our objective was to "recover" the real part of the complex coherence of the underlying interacting processes when we can rely only on a good estimate of its imaginary part. Assuming that the real part can be approximately recovered by integrating the content of the imaginary part, the HT can produce the desired effect. In another context, it has always been questionable to use linear estimators to study inherently nonlinear systems such as brain dynamics. In this sense, coherence-based measures enjoy a nice duality: on the one hand, they are formulated directly using linear transforms; on the other, they are also directly represented in the form of harmonics, which are ideal for studying stationary signals regardless of their linear or nonlinear origins. Even in analyses of more complex nonlinear/non-stationary systems, these techniques could find useful applications given their flexibility and their properties grounded in established mathematical theory (Bendat and Piersol 2011; Oppenheim et al. 1983). We have tested the robustness of coherence-based FC measures using autoregressive (linear) and neural mass (nonlinear) models. In the more complex scenario of nonlinear dynamics, we tested bivariate interactions as well as interactions among 3 and 5 ROIs in realistic brain simulations. In general, the iCOH indices showed robustness in nonlinear situations and, in particular, our proposed EIC method showed stable, accurate and superior results in most cases.

FC Sensor Based Approach Validation with Large-Scale Synthetic Data

With a large-scale simulation that produced synthetic MEG data, a comparative framework among the iCOH indices was presented to extend our study to a more complex and realistic scenario.
We used MVAR- and SDDEs-based simulations to evaluate the performance of all these measures and, in particular, the validity of the FC approach in sensor space. In general, we were able to show, with different configurations based on signals generated using two ground-truth scenarios (3 and 5 ROIs), two signal-generation models (MVAR and SDDEs), and three SNR levels (− 20, 0 and 20 dB), together with a novel sensor-nearest-ROIs-based ROC analysis ("ROC Analysis of Recovered FC Networks" section), that FC estimation in sensor space can provide a good approximation to the map of true connections, particularly with the use of the iCOH and EIC techniques (see Fig. 9 and the discussion therein). As an important conclusion, we found that the original iCOH technique (Nolte et al. 2004) was one of the best methods in our FC analysis. This is surprising if we realize that PLI and wPLI are built on top of iCOH, so we might expect superior results from PLI and wPLI. Specifically, iCOH is derived plainly from theoretical arguments, whereas PLI and wPLI add extra transformations that should empirically improve the estimators; however, these transformations seem to cause a loss of valuable information, as shown by our simulation results. In our study, lCOH was the method with the third-highest performance, though "lagging significantly" behind iCOH and EIC according to the results shown in Tables 3 and 4. Unlike PLI and wPLI, lCOH is strictly derived from theoretical arguments (Pascual-Marqui et al. 2011) without extra transformations. EIC, like PLI and wPLI, also adds extra processing to the iCOH content, but in contrast its use of the HT can indeed improve the iCOH estimator, especially under conditions such as broad-band signals with a moderate noise level. Interestingly, our study shows that the presence of noise can "obscure the visibility" of more distant sensors [with lower scale factor; see Eq. (5)].
Hence, some moderate level of noise is necessary to render good results, whereas too much noise will mask the signal. This is the case for the results obtained with the MVAR model, where the best results occurred at SNR = 0 dB, and also for the SDDE case, which was more robust to noise than MVAR (Fig. 9; Tables 1, 2, 3, 4). An essential step in our study was the use of a heuristic approach based on ROIs created from the sensors in the nearest neighborhood of the simulated sources. An important justification is that the separation of a local dipolar source from nearby sensors has a worse negative impact than its particular dipole orientation (Hillebrand and Barnes 2002). Therefore, we assumed that the closest sensors' signals contain a good representation of the underlying cortical neural dynamics. This heuristic allowed us to develop a novel sensor-nearest-ROIs-based ROC analysis to evaluate the performance of the FC methods under study. As demonstrated using this approach, the EIC method could be particularly useful for estimating true interactions among large areas, e.g. brain lobes, but it can also be important for detecting short-range connectivity. Furthermore, as evidenced by the significantly high area-under-the-curve values of the ROC (Fig. 9), and by the connectivity distribution of the thresholded FC maps used for computing the ROC statistics (e.g. see Figs. S12, S13 in the Supplementary Material), we believe that the estimation of sensor-based FC can help to disclose the map of brain-region interactions (see also Ewald et al. 2012; Hardmeier et al. 2014; Nolte et al. 2004; Stam et al. 2007; Vinck et al. 2011). However, in our simulation study we noted that recurrent connections, e.g. between ROIs 4 and 5 in the simulation with 5 ROIs, were the most difficult to estimate.
That may be due to the simulated counter-phase interactions, which can also combine negatively with the dipoles' orientations, possibly causing a biased projection in sensor space that was worsened by the simultaneous activity of the anti-phase dipoles. This observation is rooted in the fact that similar recurrent interactions, e.g. between ROIs 2 and 3 in the simulation with 3 ROIs, were much better estimated. The problem may be worsened in practice when using the standard iCOH indices, as they cannot capture zero- or \(\pi\)-phase (modulus \(2\pi\)) interactions well, as a consequence of relying solely on the imaginary part. As discussed here, in this situation the EIC method should produce more accurate FC maps, according to our simulation analysis using narrow-band and broad-band interacting signals (see Fig. 8 and the discussion therein). In general, we observed that iCOH and EIC can capture well the FC as reflected in sensor space; however, we must be cautious about false connections, which have a dramatic negative effect given the lack of knowledge about the delimitation and extension of the unknown interacting areas.

Limitation of Sensor-Based FC Approach and Future Work

According to our results with simulated data, the sensor-based FC approach using iCOH indices has the potential to uncover medium- and short-range connectivity, though the complex dynamics of the brain (e.g. nonlinear interactions among regions in deep/superficial and more/less central areas) are clearly oversimplified in the connectivity maps observed at the sensor level, which hinders the application of any technique. However, if we have a priori information about the active brain regions, and if these ROIs have a clear, non-overlapping localization, then FC analysis based on imaginary coherence methods, particularly iCOH and EIC, can provide useful information about the interacting neural populations, as shown here.
An important alternative to sensor-based FC analysis is to estimate the source activity and its FC, which will eventually allow us to combine information from different imaging modalities, including EEG and MEG's magnetometers and planar gradiometers, as well as fMRI and other data. For the case of M/EEG data as discussed in this work, several issues must still be overcome to make critical progress in source-based FC analysis, i.e. control of signal leakage, signal mixing and other VC effects. Nevertheless, the generality of our proposed methodology and its robustness to VC would facilitate its application to source-based FC analysis, which will be important for studying normal and diseased brain activity.

This work was supported by the Northern Ireland Functional Brain Mapping Project Facility (1303/101154803), funded by Invest Northern Ireland and the University of Ulster.

Supplementary material 1 (DOCX 7598 KB); Supplementary material 2 (RAR 14 KB).

Baccalá L, Sameshima K (2001) Partial directed coherence: a new concept in neural structure determination. Biol Cybern 84:463–474. https://doi.org/10.1007/PL00007990

Bendat J, Piersol A (2011) Random data: analysis and measurement procedures. Wiley, New York

Brookes MJ, O'Neill GC, Hall EL, Woolrich MW, Baker A, Corner SP, Robson SE, Morris PG, Barnes GR (2014) Measuring temporal, spectral and spatial changes in electrophysiological brain network connectivity. Neuroimage 91:282–299

Buzsáki G, Draguhn A (2004) Neuronal oscillations in cortical networks. Science 304:1926–1929. https://doi.org/10.1126/science.1099745

Cho J, Vorwerk J, Wolters CH, Knösche TR (2015) Influence of the head model on EEG and MEG source connectivity analyses. Neuroimage 110:60–77.
https://doi.org/10.1016/j.neuroimage.2015.01.043

Colclough GL, Brookes MJ, Smith SM, Woolrich MW (2015) A symmetric multivariate leakage correction for MEG connectomes. Neuroimage 117:439–448. https://doi.org/10.1016/j.neuroimage.2015.03.071

Dannhauer M, Lanfer B, Wolters CH, Knösche TR (2011) Modeling of the human skull in EEG source analysis. Hum Brain Mapp 32:1383–1399. https://doi.org/10.1002/hbm.21114

Dhamala M, Jirsa VK, Ding M (2004) Enhancement of neural synchrony by time delay. Phys Rev Lett 92:74104. https://doi.org/10.1103/PhysRevLett.92.074104

Ewald A, Marzetti L, Zappasodi F, Meinecke FC, Nolte G (2012) Estimating true brain connectivity from EEG/MEG data invariant to linear and static transformations in sensor space. Neuroimage 60:476–488. https://doi.org/10.1016/j.neuroimage.2011.11.084

Fries P (2005) A mechanism for cognitive dynamics: neuronal communication through neuronal coherence. Trends Cogn Sci 9:474–480. https://doi.org/10.1016/j.tics.2005.08.011

Friston K, Harrison L, Daunizeau J, Kiebel S, Phillips C, Trujillo-Barreto N, Henson R, Flandin G, Mattout J (2008) Multiple sparse priors for the M/EEG inverse problem. Neuroimage 39:1104–1120. https://doi.org/10.1016/j.neuroimage.2007.09.048

Gollo LL, Mirasso C, Sporns O, Breakspear M (2014) Mechanisms of zero-lag synchronization in cortical motifs. PLoS Comput Biol. https://doi.org/10.1371/journal.pcbi.1003548

Grandchamp R, Delorme A (2011) Single-trial normalization for event-related spectral decomposition reduces sensitivity to noisy trials. Front Psychol. https://doi.org/10.3389/fpsyg.2011.00236

Granger CWJ (1969) Investigating causal relations by econometric models and cross-spectral methods.
Econometrica 37:424–438. https://doi.org/10.2307/1912791

Gross J, Kujala J, Hamalainen M, Timmermann L, Schnitzler A, Salmelin R (2001) Dynamic imaging of coherent sources: studying neural interactions in the human brain. Proc Natl Acad Sci 98:694–699. https://doi.org/10.1073/pnas.98.2.694

Guggisberg AG, Homma SM, Findlay AM, Dalal SS, Kirsch HE, Berger MS, Nagarajan SS (2008) Mapping functional connectivity in patients with brain lesions. Ann Neurol 63:193–203. https://doi.org/10.1002/ana.21224

Hämäläinen MS, Ilmoniemi RJ (1994) Interpreting magnetic fields of the brain: minimum norm estimates. Med Biol Eng Comput 32:35–42. https://doi.org/10.1007/BF02512476

Hardmeier M, Hatz F, Bousleiman H, Schindler C, Stam CJ, Fuhr P (2014) Reproducibility of functional connectivity and graph measures based on the phase lag index (PLI) and weighted phase lag index (wPLI) derived from high resolution EEG. PLoS ONE 9:e108648. https://doi.org/10.1371/journal.pone.0108648

Haufe S, Ewald A (2016) A simulation framework for benchmarking EEG-based brain connectivity estimation methodologies. Brain Topogr. https://doi.org/10.1007/s10548-016-0498-y

Haufe S, Nikulin VV, Müller K-R, Nolte G (2013) A critical assessment of connectivity measures for EEG data: a simulation study. Neuroimage 64:120–133. https://doi.org/10.1016/j.neuroimage.2012.09.036

Higham DJ (2001) An algorithmic introduction to numerical simulation of stochastic differential equations. SIAM Rev 43:525–546

Hillebrand A, Barnes GR (2002) A quantitative assessment of the sensitivity of whole-head MEG to activity in the adult human cortex. Neuroimage 16:638–650.
https://doi.org/10.1006/nimg.2002.1102 CrossRefPubMedGoogle Scholar Huang M-X, Huang CW, Robb A, Angeles A, Nichols SL, Baker DG, Song T, Harrington DL, Theilmann RJ, Srinivasan R, Heister D, Diwakar M, Canive JM, Edgar JC, Chen Y-H, Ji Z, Shen M, El-Gabalawy F, Levy M, McLay R, Webb-Murphy J, Liu TT, Drake A, Lee RR (2014) MEG source imaging method using fast L1 minimum-norm and its applications to signals with brain noise and human resting-state source amplitude images. Neuroimage 84:585–604. https://doi.org/10.1016/j.neuroimage.2013.09.022 CrossRefPubMedGoogle Scholar Izhikevich EM, Edelman GM (2008) Large-scale model of mammalian thalamocortical systems. Proc Natl Acad Sci USA 105:3593–3598. https://doi.org/10.1073/pnas.0712231105 CrossRefPubMedGoogle Scholar Jansen BH, Rit VG (1995) Biological cybernetics in a mathematical model of coupled cortical columns. Biol Cybern 366:357–366CrossRefGoogle Scholar Jensen O, Kaiser J, Lachaux JP (2007) Human gamma-frequency oscillations associated with attention and memory. Trends Neurosci 30:317–324. https://doi.org/10.1016/j.tins.2007.05.001 CrossRefPubMedGoogle Scholar Lachaux J-P, Rodriguez E, Martinerie J, Varela FJ (1999) Measuring phase synchrony in brain signals. Hum Brain Mapp 8:194–208. https://doi.org/10.1002/(SICI)1097-0193(1999)8:4<194::AID-HBM4>3.0.CO;2-C.CrossRefGoogle Scholar Lanfer B, Jordanov IP, Scherg M, Wolters CH (2012a) Influence of interior cerebrospinal fluid compartments on EEG source analysis. Biomed Tech 57:623–626. https://doi.org/10.1515/bmt-2012-4020 CrossRefGoogle Scholar Lanfer B, Scherg M, Dannhauer M, Knösche TR, Burger M, Wolters CH (2012b) Influences of Skull Segmentation Inaccuracies on EEG Source Analysis. Neuroimage 62:418–431CrossRefGoogle Scholar Lopes da Silva F (2013) EEG and MEG: Relevance to neuroscience. Neuron 80:1112–1128. https://doi.org/10.1016/j.neuron.2013.10.017 CrossRefPubMedGoogle Scholar Lütkepohl H (2005) New introduction to multiple time series analysis. 
Springer, BerlinCrossRefGoogle Scholar Makeig S, Debener S, Onton J, Delorme A (2004) Mining event-related brain dynamics. Trends Cogn Sci 8:204–210. https://doi.org/10.1016/j.tics.2004.03.008 CrossRefPubMedGoogle Scholar Mao X (2007) Stochastic differential equations and applications, 2nd edn. Elsevier, AmsterdamGoogle Scholar Menendez GP, Andino RGonzalez, Lantz S, Michel G, Landis CM, T (2001) Noninvasive localization of electromagnetic epileptic activity. I. Method descriptions and simulations. Brain Topogr 14:131–137. https://doi.org/10.1023/A:1012944913650 CrossRefGoogle Scholar Nolte G, Bai O, Wheaton L, Mari Z, Vorbach S, Hallett M (2004) Identifying true brain interaction from EEG data using the imaginary part of coherency. Clin Neurophysiol 115:2292–2307. https://doi.org/10.1016/j.clinph.2004.04.029 CrossRefPubMedGoogle Scholar Nolte G, Ziehe A, Nikulin VV, Schlögl A, Krämer N, Brismar T, Müller K-R (2008) Robustly estimating the flow direction of information in complex physical systems. Phys Rev Lett 100:234101. https://doi.org/10.1103/PhysRevLett.100.234101 CrossRefPubMedGoogle Scholar Nunez PL, Srinivasan R (2006) Electric fields of the brain: the neurophysics of EEG. Oxford University Press, OxfordCrossRefGoogle Scholar Nunez PL, Srinivasan R, Westdorp AF, Wijesinghe RS, Tucker DM, Silberstein RB, Cadusch PJ (1997) EEG coherency I: statistics, reference electrode, volume conduction, Laplacians, cortical imaging, and interpretation at multiple scales. Electroencephalogr Clin Neurophysiol 103:499–515. https://doi.org/10.1016/S0013-4694(97)00066-7 CrossRefPubMedGoogle Scholar O'Neill GC, Barratt EL, Hunt BAE, Tewarie PK, Brookes MJ (2015) Measuring electrophysiological connectivity by power envelope correlation: a technical review on MEG methods. Phys Med Biol 60:R271. 
https://doi.org/10.1088/0031-9155/60/21/R271 CrossRefPubMedGoogle Scholar O'Neill GC, Tewarie P, Vidaurre D, Liuzzi L, Woolrich MW, Brookes MJ, (2017) Dynamics of large-scale electrophysiological networks: a technical review. Neuroimage. https://doi.org/10.1016/j.neuroimage.2017.10.003 CrossRefPubMedPubMedCentralGoogle Scholar Olde Dubbelink KTE, Hillebrand A, Stoffers D, Deijen JB, Twisk JWR, Stam CJ, Berendse HW (2014) Disrupted brain network topology in Parkinson's disease: a longitudinal magnetoencephalography study. Brain 137:197–207. https://doi.org/10.1093/brain/awt316 CrossRefPubMedGoogle Scholar Oostenveld R, Fries P, Maris E, Schoffelen JM (2011) FieldTrip: open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Comput Intell Neurosci. https://doi.org/10.1155/2011/156869 2011.CrossRefPubMedPubMedCentralGoogle Scholar Oppenheim A, Willsky A, Nawab S (1983) Signals and systems. Prentice-Hall, Englewood CliffsGoogle Scholar Palva S, Palva JM (2012) Discovering oscillatory interaction networks with M/EEG: challenges and breakthroughs. Trends Cogn Sci 16:219–229. https://doi.org/10.1016/j.tics.2012.02.004 CrossRefPubMedGoogle Scholar Pascual-Marqui RD (2007) Discrete, 3D distributed, linear imaging methods of electric neuronal activity. Part 1: exact, zero error localization. Neurosci Lett 485:198–203. https://doi.org/10.1016/j.neulet.2010.09.011 CrossRefGoogle Scholar Pascual-Marqui RD, Lehmann D, Koukkou M, Kochi K, Anderer P, Saletu B, Tanaka H, Hirata K, John ER, Prichep L, Biscay-Lirio R, Kinoshita T (2011) Assessing interactions in the brain with exact low-resolution electromagnetic tomography. Philos Trans R Soc A Math Phys Eng Sci 369:3768–3784. https://doi.org/10.1098/rsta.2011.0081 CrossRefGoogle Scholar Polanía R, Nitsche MA, Korman C, Batsikadze G, Paulus W (2012) The importance of timing in segregated theta phase-coupling for cognitive performance. Curr Biol 22:1314–1318. 
https://doi.org/10.1016/j.cub.2012.05.021 CrossRefPubMedGoogle Scholar Ringo JL, Doty RW, Demeter S, Simard PY (1994) Time is of the essence: a conjecture that hemispheric specialization arises from interhemispheric conduction delay. Cereb Cortex 4:331–343. https://doi.org/10.1093/cercor/4.4.331 CrossRefPubMedGoogle Scholar Rodriguez E, George N, Lachaux J-P, Martinerie J, Renault B, Varela FJ (1999) Perception's shadow: long-distance synchronization of human brain activity. Nature 397:430–433. https://doi.org/10.1038/17120 CrossRefPubMedGoogle Scholar Schnitzler A, Gross J (2005) Normal and pathological oscillatory communication in the brain. Nat Rev Neurosci 6:285–296. https://doi.org/10.1038/nrn1650 CrossRefPubMedGoogle Scholar Schoffelen JM, Gross J (2009) Source connectivity analysis with MEG and EEG. Hum. Brain Mapp 30:1857–1865. https://doi.org/10.1002/hbm.20745 CrossRefGoogle Scholar Shaw C (1984) Correlation and coherence analysis a selective tutorial review of the eeg. Int J Psychophysiol 1:255–266. https://doi.org/10.1016/0167-8760(84)90045-X CrossRefPubMedGoogle Scholar Siems M, Pape A, Hipp JF, Siegel M (2016) Measuring the cortical correlation structure of spontaneous oscillatory activity with EEG and MEG. Neuroimage 129:345–355. https://doi.org/10.1016/j.neuroimage.2016.01.055 CrossRefPubMedGoogle Scholar Simoes C, Jensen O, Parkkonen L, Hari R (2003) Phase locking between human primary and secondary somatosensory cortices. Proc Natl Acad Sci 100:2691–2694. https://doi.org/10.1073/pnas.0437944100 CrossRefPubMedGoogle Scholar Singer W (1999) Neuronal synchrony: a versatile code for the definition of relations? Neuron 24:49–65. https://doi.org/10.1016/S0896-6273(00)80821-1 CrossRefPubMedGoogle Scholar Stam CJ, van Straaten ECW (2012) The organization of physiological brain networks. Clin Neurophysiol 123:1067–1087. 
https://doi.org/10.1016/j.clinph.2012.01.011 CrossRefPubMedGoogle Scholar Stam CJ, Jones BF, Manshanden I, van Cappellen van Walsum AM, Montez T, Verbunt JPA, de Munck, JC, van Dijk BW, Berendse HW, Scheltens P (2006) Magnetoencephalographic evaluation of resting-state functional connectivity in Alzheimer's disease. Neuroimage 32:1335–1344. https://doi.org/10.1016/j.neuroimage.2006.05.033 CrossRefPubMedGoogle Scholar Stam CJ, Nolte G, Daffertshofer A (2007) Phase lag index: assessment of functional connectivity from multi channel EEG and MEG with diminished bias from common sources. Hum Brain Mapp 28:1178–1193. https://doi.org/10.1002/hbm.20346 CrossRefPubMedGoogle Scholar Stam CJ, de Haan W, Daffertshofer A, Jones BF, Manshanden I, van Cappellen van Walsum AM, Montez T, Verbunt JPA, de Munck, JC, van Dijk BW, Berendse HW, Scheltens P (2008) Graph theoretical analysis of magnetoencephalographic functional connectivity in Alzheimer's disease. Brain 132:213–224. https://doi.org/10.1093/brain/awn262 CrossRefPubMedGoogle Scholar Strogatz SH (2000) From Kuramoto to Crawford: exploring the onset of synchronization in populations of coupled oscillators. Physica D 143:1–20. https://doi.org/10.1016/S0167-2789(00)00094-4 CrossRefGoogle Scholar Tallon-Baudry C, Bertrand O (1999) Oscillatory gamma activity in humans and its role in object representation. Trends Cogn Sci 3:151–162. https://doi.org/10.1016/S1364-6613(99)01299-1 CrossRefPubMedGoogle Scholar Touboul J, Hermann G, Faugeras O (2012) Noise-induced behaviors in neural mean field dynamics. SIAM J Appl Dyn Syst 11:49–81CrossRefGoogle Scholar Van Veen B, Van Drongelen W, Yuchtman M, Suzuki A (1997) Localization of brain electrical activity via linearly constrained minimum variance spatial filtering. IEEE Trans Biomed Eng 44:867–880. 
https://doi.org/10.1109/10.623056 CrossRefPubMedGoogle Scholar Van de Steen F, Faes L, Karahan E, Songsiri J, Valdes-Sosa PA, Marinazzo D (2016) Critical comments on EEG sensor space dynamical connectivity analysis. Brain Topogr. https://doi.org/10.1007/s10548-016-0538-7 CrossRefPubMedGoogle Scholar Vinck M, Oostenveld R, van Wingerden M, Battaglia F, Pennartz CMA (2011) An improved index of phase-synchronization for electrophysiological data in the presence of volume-conduction, noise and sample-size bias. Neuroimage 55:1548–1565. https://doi.org/10.1016/j.neuroimage.2011.01.055 CrossRefPubMedGoogle Scholar Vorwerk J, Clerc M, Burger M, Wolters C (2012) Comparison of boundary element and finite element approaches to the EEG forward problem. Biomed Tech 57:795–798Google Scholar Vorwerk J, Cho J, Rampp S, Hamer H, Thomas R, Wolters CH (2014) A guideline for head volume conductor modeling in EEG and MEG. Neuroimage 100:590–607CrossRefGoogle Scholar Wheaton La, Nolte G, Bohlhalter S, Fridman E, Hallett M (2005) Synchronization of parietal and premotor areas during preparation and execution of praxis hand movements. Clin Neurophysiol 116:1382–1390. https://doi.org/10.1016/j.clinph.2005.01.008 CrossRefPubMedGoogle Scholar Zygmund A (2002) Trigonometric series, 3rd edn. Cambridge University Press, CambridgeGoogle Scholar Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. 
1.Northern Ireland Functional Brain Mapping Facility, Intelligent Systems Research Centre, School of Computing and Intelligent SystemsUlster UniversityDerry~LondonderryUK 2.Department of Neurosciences, School of Medical Sciences/Hospital Universiti Sains MalaysiaUniversiti Sains MalaysiaKota BharuMalaysia Sanchez Bornot, J.M., Wong-Lin, K., Ahmad, A.L. et al. Brain Topogr (2018) 31: 895. https://doi.org/10.1007/s10548-018-0640-0
CommonCrawl
Memorable Meetups Twas my distinct pleasure to join a party of earnest high school teachers in a meeting with the PSU Middle East Center at Tarboush this evening. I showed up late, given other pressing engagements, but one of the teachers, from Lincoln High, had decided to stay on and have a real dinner (this is a top notch Lebanese restaurant). I joined in with Dr. Tagrid Khuri, who had invited me to this event and whom I'd not seen for quite a long time. We all had our stories to tell, our adventures in that part of the world. Dr. Tag, as she is affectionately known by a large Arabic-speaking community, has the most up-to-date experience, being Jordanian (currently) with plenty of reasons to visit friends in Amman. The high school teacher and I hadn't been to the Middle East in a long time. Not counting Egypt, the last time I was in the Jerusalem area was when Bobby Fischer was contending with Boris Spassky for the title of world champion at chess. That was a long time ago, I think those still living might agree. Next, I adjourned to Greater Trumps for a meeting with Synchronfile, if metonymy may be permitted. As usual, the futuristic gadgets were on and ablaze, at least for part of our meetup. Trevor is a serious scholar and top ranking Esozone type here in Portland. His interest in the restoration of Dymaxion Car 2, the model for the newly minted Dymaxion Car 4, a project undertaken by Lord Norman Foster, has been more than just casual. Not atypically, Trevor expressed his admiration and respect for Joe Moore, another independent scholar doing valuable work. A Scholar Talks :: opening number :: I showed up at the Unitarian Church prepared to enjoy Rabbi Michael Lerner and was not disappointed. I did some speed reading in his book through the opening numbers and then pretty much listened in rapt attention, through the Q&A. I surprised myself in electing to drive the taxi, which I rarely do off duty, not that it's a registered commercial taxi or anything. 
This blog has its namespace. The guy won me over when he went out on a limb and expressed his fondest hope, which was that statism would go away and we would finally start dealing with the planet's ecological issues in a more mature manner, more befitting this self-professed "sapien" status. In the meantime, we could stay in the dark ages with some two state solution for the Israel / Palestine identity problem, keep it schizo. Einstein had hoped for a similar scenario. I noticed Michael didn't include Einstein in his index, and yet his fear-versus-longing analysis (we're each somewhere on the spectrum) is pure Einstein, through Bucky. So in announcing his "no state solution", I thought Lerner was overtly joining the transcendentalist school, a mark of his spiritual progress. The book is a winding tale from the crusades forward, to just a few months ago. Lerner, like Kierkegaard, rejects the voice of the Objective Historian as a mask, and admits his bias up front: to tell the story in such a way that greater happiness might still be a possibility. He's not about closing doors. His message is a lot like the Dalai Lama's when it comes to happiness, so I could easily see why Bishop Tutu liked his book (the latter being a big fan of DL XIV). In Lerner's view, we each oscillate between a dog eat dog hell and a heaven wherein people actually love one another and are adept at community. Both world views are self-reinforcing. He names them the right and left hand of God respectively. Thanks for another great cue Suzanne, and bon voyage. Various scene changes are in progress. Lindsey is methodically whittling away at her stash of accumulated treasures. She kindly donated her Gulfstream pen collection to Blue House, along with a DVD on the G650, which I filed on the top shelf next to Torture Taxi, a Gothic tale. Melody is wearing gas station looking overalls like from the movie eXistenZ, which she's seen, and agrees we should share with Lindsey. Jen has been working hard too. 
I don't always know what's going on as I'm part time in the MJ Chair of CompSci over at Open Bastion, either grading for OST or reading this new book on Wittgenstein and Weininger, or some other treasure. Not watching TV, that's for sure. Dave Koski has been doing an interesting toon branching off the Richard Hawkins hypertoon at Grunch.net, involving that flapping tetrahedron (the opening sequence). He'd unearthed Piero della Francesca's formula for the volume of a tetrahedron given its six edges, and whittled it down to one edge changing (f, for flap), the others set to the constant 2, as in 2 radii. The two equilateral triangles flap in the wind, like butterfly wings or pointy book covers, with a shared hinge or spine. When f = 2, we have our regular tetrahedron. There's a parabola of volumes as all-but-f are held constant. Derivations of P, Q and R modules (mnemonic: peculiar) were forthcoming, leading off into other areas (as hypertoons do). These are the kinds of reveries to pipe to the Coffee Shops Network, to shared screens or laptops, from Youtube playlists, from secret sources (like with secret sauces). One needs that bridging talent space found at Bridges (the conference) between art-math and science, and that includes the arts of computer programming and animation (anime). Python.TV is a likely stash point if you want to check back. Hypertoons were originally implemented in Visual Python after Hawkins encouraged me to enter a contest for an SGI workstation. Tara is planning her scene change as well, with the so-called "common app" staring her in the face. We both had to get government PINs to sign the FAFSA. Parents of college-aged North Americans get to wade through a new labyrinth hammered together in cyberspace, though it's probably different in the state of Canada. I've got the Facebook scrolls for working with Friends, in addition to these journals. Most of my remarks on recent news, with citations to stories, are happening there. 
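For the curious, the flapping-tetrahedron volume is easy to check numerically with the Cayley-Menger determinant (an equivalent of Piero della Francesca's six-edge formula). The closed form `flap_volume` below is my own reduction for five edges of length 2 and one flap edge f, so treat this as an illustrative sketch:

```python
from math import sqrt

def det(m):
    """Determinant by cofactor expansion -- fine for a 5x5 matrix."""
    if len(m) == 1:
        return m[0][0]
    total = 0.0
    for j, val in enumerate(m[0]):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * val * det(minor)
    return total

def tetra_volume(d2):
    """Volume from squared edge lengths d2[i][j], via the Cayley-Menger
    identity: 288 * V**2 equals the bordered determinant built below."""
    cm = [[0.0, 1.0, 1.0, 1.0, 1.0]]
    for i in range(4):
        cm.append([1.0] + [float(d2[i][j]) for j in range(4)])
    return sqrt(det(cm) / 288.0)

def flap_volume(f):
    """My reduction for five edges of length 2 and one flap edge f."""
    return f * sqrt(12.0 - f * f) / 6.0

def flap_d2(f):
    """Squared edges: vertices 0,1 span the hinge; apexes 2,3 sit f apart."""
    return [[0.0, 4.0, 4.0, 4.0],
            [4.0, 0.0, 4.0, 4.0],
            [4.0, 4.0, 0.0, f * f],
            [4.0, 4.0, f * f, 0.0]]
```

At f = 2 both routes give the regular tetrahedron's volume 2√2/3, and since V² = f²(12 − f²)/36, it's the *squared* volume that traces a parabola (in f²) as the flap opens.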
If Pakistan renounces nukes and asks to sign the NPT as a non-NWS, that could undermine India's credibility as a moral leader in the West, where the Countdown to Zero campaign has taken hold with a vengeance. I don't think that's likely at the federal level (in Pakistan) but the desire among young Muslim faithful to ban the bomb is quite sincere, and currently consistent with Iranian rhetoric, which is why some Christian recruiters have had to flip their position, even among the evangelicals (to be Christian and "for the bomb" just sounds moronic as a wine and cheese party line among officers, holds more water in like NATO's "worst-of-occupy" LoserVilles maybe). DiNucci was jokingly accusing Nirel at Wanderers yesterday of getting her friend Julia psyched about Paris, the latter being a valued member of his humanist circle. Also it sounds like Bader (who also knows Alex, part of this other circle) is off to Germany for a spell. Scene changes everywhere. DiNucci is fine tuning his book, almost finished. He's caring for an elder so isn't traveling much himself. I've connected Koski's recent studies back to Martian Math on Synergeo, which subject I'm slated to teach again this summer, for Saturday Academy. Testing MathML This formula by Ramanujan is being rendered by MathJax. The equation was derived from the handwriting-to-MathML utility, Web Equation, and then hand edited a bit. This formula served as a basis for our Python Pi Day contest last year, at OST. $$ \dfrac {1} {\pi }=\dfrac {\sqrt {8}} {9801}\sum _{n=0}^{\infty }\dfrac {\left( 4n\right) !} {\left( n!\right) ^{4}}\left[ \dfrac {26390n+1103} {396^{4n}}\right] $$ Right click on the equation and choose Show Source to look at the MathML. In LaTeX (I didn't need to edit this one): $$ \dfrac {1} {\pi }=\dfrac {\sqrt {8}} {9801}\sum _{n=0}^{\infty }\dfrac {\left( 4n\right) !} {\left( n!\right) ^{4}}\left[ \dfrac {26390n+1103} {396^{4n}}\right] $$ Ramanujan's crazy-making identities get mentioned by me a few times in this debate thread on math-teach. 
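Since the Pi Day contest gets a mention, here's a quick sketch of how that series can be summed in plain Python (the function name and term count are mine, for illustration):

```python
from math import factorial, isclose, pi, sqrt

def ramanujan_inv_pi(terms=3):
    """Partial sum of Ramanujan's 1/pi series; each term adds roughly
    eight correct digits, so a few terms exhaust double precision."""
    s = sum(factorial(4 * n) / factorial(n) ** 4
            * (26390 * n + 1103) / 396 ** (4 * n)
            for n in range(terms))
    return sqrt(8) / 9801 * s

print(1 / ramanujan_inv_pi())  # approaches math.pi
```

Even the single n = 0 term already lands within about 1e-7 of pi, which is what makes the identity such crowd-pleasing contest material.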
If you're not seeing equations for one-over-pi, click here for a picture of this blog post to see what you're missing -- provided Flickr still exists. Glenn suggested the family condo homeowner's association might sue the linoleum company, over all that asbestos, which everyone has in their bathrooms. Property values just dipped. The banks should adjust their mortgages downward accordingly, like finding out there's a sinkhole, like in Guatemala City, but the banks never do. They'd rather we not blab with each other about property deficiencies, but in fact they can't stop us. Speaking of which, the ceiling is still slated to go, just haven't figured out if we're going two-story. This isn't the condo I'm talking about, but the Blue Tent (really a wood frame structure with lath and plaster walls, wood siding), which has an amateur's 2nd floor deck, some pet project of former owners we'll never know. We bought a neighborhood hand-me-down built in 1905 and felt lucky. Yep, always lucky to be in America, no matter how they treat ya (spam up the wazoo, full body scans, pee checks, rigged elections... hardly what we signed up for as kids, so blame the terrorists right?). Anyway, I'm ranting. The bookkeeping pooter is still in eternal reboot mode. That's not the end of the world but I want what's on those drives. First step is to bust the dust bunnies and see if she recovers. Before that though, I'm hooking up the Toshiba to the printer that only works with the other Toshiba that just up and died the other day, while we were watching. No kidding. Tara adeptly switched to the Ubuntu laptop and upgraded the heck out of it, but we're still down a machine and don't want to get Win7 when they're about to roll out Win8. By the way, this LG phone they strong-armed me into getting, said use it or lose it on the credit, is the worst phone ever. Tries to sell apps, freezes, just doesn't get it in general. I'll get more specific with the model number when I get the time. 
I'll not blame Verizon this time as they can't know some of their models from reputable companies are just plain junk really. Who has the time to test them all? Not the government certainly, oh no. I'm back on Synergeo even after the big fight, which left a lot of us flocking to a different group (a Google one, no reflection on Yahoo! in terms of what we were fighting about). A similar farce brought SWM back on board in Wittrs-Plus/Ex, Sean's station. He narrated some of the haps on Analytic, the fighting there. I was happy for the synopsis as I don't subscribe to Analytic nor really have the time. Sean's station has been great though. I've been posting about this fictive BBC broadcast they could actually pick up on if they wanted, based on a famous (if somewhat nefarious) book about the great master (the quintessential late millennium philosopher). The Europeans seem to be getting all goofy given they can't figure out their finances. Anything for a welcome distraction, like saber rattle at Iran. Talk about a dysfunctional family. I'm glad their footprint is confined to Washington DC in a lot of ways, a kind of containment. North Americans are free to go about their business without having to fixate on what Euros are thinking. We'll catch up on Youtube later. In the meantime, I've been watching the Occupy Chile movement and understand they blame vouchers for some of their problems. In a lot of ways, it's Chicago that's no longer obeyed, when it comes to macro-economics, but that back had to break further north first probably (talking neocons, remember them?). "Allende couldn't hack it but Obama could" or something like that? -- too early to hatch a full blown narrative. Anyway maybe Obama is for vouchers I can't remember -- time to tune in the elections a little more. Once the Republicans snubbed the Governor of Louisiana by disallowing him time in the TV circus, I knew I'd made the right decision in killing my TV. Dumbs ya down really bad, clinically. 
Chomsky is right, Nader too. Geniuses protect themselves better, develop antibodies. If it weren't for the NFL (no, not talking football, duh) I don't think as many would survive public school, that's for sure.
BMC Biomedical Engineering Iterative Bayesian denoising based on variance stabilization using Contourlet Transform with Sharp Frequency Localization: application to EFTEM images Soumia Sid Ahmed1, Zoubeida Messali1, Larbi Boubchir ORCID: orcid.org/0000-0002-5668-68012, Ahmed Bouridane3, Sergio Marco4 & Cédric Messaoudi4 BMC Biomedical Engineering volume 1, Article number: 13 (2019) Cite this article Due to the presence of high noise levels in tomographic series of energy-filtered transmission electron microscopy (EFTEM) images, the alignment and 3D reconstruction steps become difficult. To improve the alignment process, which in turn allows more accurate three-dimensional tomographic reconstructions, a preprocessing step should be applied to the EFTEM data series. Experiments with real EFTEM data series at low SNR show the feasibility and accuracy of the proposed denoising approach, which is competitive with the best existing methods for Poisson image denoising. The effectiveness of the proposed denoising approach stems from the use of a nonparametric Bayesian estimation in the Contourlet Transform with Sharp Frequency Localization domain (CTSD) together with a variance-stabilizing transformation (VST). Furthermore, the optimal inverse Anscombe transformation used to obtain the final estimate of the denoised images has allowed an accurate tomographic reconstruction. The proposed approach provides qualitative information on the 3D distribution of individual chemical elements in the considered sample. Background Transmission Electron Tomography (TET) is one of the most widely used methods for structural analysis in biology and is capable of revealing subcellular structures at the nanometric scale. 
The combination of TET with chemical mapping (such as energy-filtered transmission electron microscopy, EFTEM) gives qualitative information on the distribution of the chemical elements by the generation of 3D chemical maps of the analyzed samples [1], thus overcoming the limitation of 2D maps. In EFTEM mode, the transmitted electrons lose different energies according to their interaction with the atoms present in the sample. These energies are characteristic of each type of interaction, and magnetic fields can be used to separate the electrons accordingly. Thus, it is possible to construct a filtered image using only those electrons having lost a precise energy. This approach allows for the computation of elemental maps as images calculated after removing the nonspecific signals. The inherently low signal-to-noise ratio (SNR) of biological specimens when EFTEM is performed remains a major obstacle to generating high-resolution, good-quality EFTEM 3D maps, thus limiting the use of 3D chemical mapping in biology. This paper aims to improve the quality of the acquired images by applying denoising approaches that respect the physical significance of the pixel values of EFTEM maps (which represent the number of electrons having lost a characteristic energy), in order to produce high-quality 3D chemical maps of the analyzed sample. There is much interest in developing novel methods to remove noise in its different forms from images in such a way that the original image is discernible and the signal quality is not modified. However, existing image-enhancement methods amplify noise when they amplify weak edges, since they cannot distinguish noise from weak edges [2, 3]. Here, we extend our preliminary work by considering more general optimal inverses for the Anscombe transformation in an iterative process. On the other hand, it has been shown that there are two types of noise in electron microscopy [4, 5]. 
The first one comes from the sensor, such as the CCD camera, while the second comes from the inelastic interactions of the electron beam with the specimen. The noise from the camera is dominant and is modeled as a Poisson process. Therefore, we have assumed that the EFTEM images are corrupted by Poisson noise. Accordingly, the EFTEM images are denoised using a Bayesian denoiser in the Contourlet Transform with Sharp Frequency Localization (CTSD) [6] domain, applied iteratively in order to progressively improve the effectiveness of the Anscombe transformation (i.e., the variance-stabilizing transformation, VST) [7, 8]. Furthermore, we demonstrate that the Poisson-noise assumption, combined with a Bayesian denoiser in the CTSD domain and the Anscombe transformation, allows for a significant enhancement of the chemical-map computation, which in turn enhances the 3D reconstructed volume of the EFTEM images, at a computational cost at worst twice that of our previous non-iterative Bayesian denoiser [9]. We demonstrate through experiments with real EFTEM images contaminated by Poisson noise that the performance of the proposed method substantially surpasses that of previously published methods. The proposed method is evaluated qualitatively in an observer study, to assess the improvement of 3D visualizations of EFTEM series, and quantitatively in terms of SNR. This paper is organized as follows: "Results" section defines the evaluation criteria considered and the computed maps, including a comparative analysis of the performance of the proposed denoising method with previously published denoising methods [3, 9–12] on different real data sets. Furthermore, numerical experiments in this section demonstrate the effectiveness of the proposed method over recent denoising approaches. "Discussion" section discusses the performance and effectiveness of the proposed method. Concluding remarks are given in "Conclusion" section. 
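To illustrate the VST idea, here is a minimal sketch of the Anscombe transformation together with a simple algebraic inverse; note that the "optimal" exact unbiased inverse used in the paper is more elaborate and computed numerically, so this is an assumption-laden toy version (including the hand-rolled Poisson sampler used to check the stabilization):

```python
import math
import random
import statistics

def anscombe(x):
    """Anscombe VST: maps Poisson counts to approximately unit-variance data."""
    return 2.0 * math.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Asymptotically unbiased algebraic inverse; the exact unbiased
    inverse used for the final estimate is not this closed form."""
    return (y / 2.0) ** 2 - 1.0 / 8.0

def poisson(lam, rng):
    """Knuth's method -- adequate for the modest intensities sketched here."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

rng = random.Random(0)
for lam in (10, 50):
    samples = [anscombe(poisson(lam, rng)) for _ in range(20000)]
    print(f"lambda={lam}: stabilized variance = {statistics.pvariance(samples):.3f}")
```

Whatever the underlying Poisson intensity, the transformed data come out with variance close to 1, which is what lets a Gaussian-noise Bayesian denoiser operate in the stabilized domain.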
Finally, "Methods" section first describes the EFTEM images used in this work, which are data collected at the energies 650, 680 and 710 eV from a biological sample, namely Fonsecaea pedrosoi. It also describes the proposed iterative denoising method, used to compute the chemical maps and thereby enhance the quality of the 3D reconstructed volume of the EFTEM images. In order to assess the performance of the proposed method described in "Methods" section, a quantitative evaluation has been carried out against our previously published denoising approach [9] as well as recent denoising methods. For the sake of comparison, we have only chosen denoising methods using the same Bayesian denoiser with the scale-mixture approximation to the alpha-stable prior, called "α-stable mixture", in different domains. The three domains that we have considered are the Wavelet transform [13], the Contourlet transform [14] and the CTSD domains, respectively, as shown in the workflow at the end of this paper (Fig. 5). The Hot spot block in the workflow represents a pre-processing step that removes aberrant pixels from the EFTEM images using the ImageJ plugin EFTEM-TomoJ [1, 15]. The EFTEM-TomoJ and TomoJ blocks are the ImageJ plugins used to compute the elemental map and the 3D tomographic reconstruction of our tilt series, respectively. Since the aim of this study is to enhance the quality of the reconstructed volume of the sample, we have assessed our proposed method on the 3D volumes, evaluating its effectiveness before and after the reconstruction. In addition to the visual quality of the 3D volumes, we have used two evaluation criteria: the SNR and the Weber contrast (CW) [16] of the iron aggregates present at the cell wall (the signal) in the 3D volume, using the resin area as the background. 
Both the SNR and CW are calculated using the projections from the central planes (20 to 38), which contain the aggregates in the reconstructed volume. Figure 1 shows the central plane of the reconstructed volume and the different areas before denoising. Central plane (number 31 of sections 0 to 63) of the reconstructed volume before denoising. The gray levels of the voxels are directly proportional to the quantity of iron present. a cytoplasm, b resin, c iron aggregate area d iron aggregates on the cell wall, which are considered as the useful signal and are used to evaluate the different algorithms The SNR was calculated in decibels using the following equation: $$ SNR_{wall}=10\log_{10}\left[ \left(\frac{\overline{W}-\overline{R}}{\sigma_{resin}}\right)^{2} \right] $$ where \(\overline {W}\) and \(\overline {R}\) are the average values of the amplitude of the net wall signal and the resin, respectively, and \(\sigma_{resin}\) is the standard deviation of the resin. In order to calculate the Weber contrast in the wall area, we used the following formula: $$ C_{W}=\left[\frac{\overline{W}-\overline{R}}{\overline{R}}\right] $$ where CW is the contrast in the wall area, and \(\overline {W}\) and \(\overline {R}\) are the mean values of the pixels in the wall and resin zones, respectively. The detection of the iron aggregates is an important task for following the underlying biological processes. The texture of the different regions in the EFTEM image is not considered in this work. Figure 2 shows the visual results for the central plane and the eighteen projections of the reconstructed volume using the estimated images for each denoising method. One can clearly see that the proposed iterative Bayesian denoiser in the CTSD domain with the VST for Poisson noise visually outperforms the other denoising methods considered. By combining the noisy observation with a previously obtained estimate of the noise-free data, our denoiser overcomes the limitations of our original Bayesian denoiser in [9]. 
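The two criteria can be computed directly from the voxel values of the two regions; a minimal sketch (extracting the wall and resin masks from the volume is assumed to happen elsewhere):

```python
import math
import statistics

def snr_wall(wall, resin):
    """SNR in dB of the wall signal over the resin background,
    following the definition above."""
    w_bar = statistics.fmean(wall)
    r_bar = statistics.fmean(resin)
    sigma_resin = statistics.pstdev(resin)
    return 10.0 * math.log10(((w_bar - r_bar) / sigma_resin) ** 2)

def weber_contrast(wall, resin):
    """Weber contrast C_W of the wall area against the resin."""
    r_bar = statistics.fmean(resin)
    return (statistics.fmean(wall) - r_bar) / r_bar
```

Both functions take flat lists of voxel values, so they apply unchanged whether the regions come from a single plane or from the stack of central planes.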
Zooming in on a textured area of the sample shows not only that our denoiser ensures a good compromise between noise rejection and the preservation of finer details in the image, but also that some details which were hidden by the noise become visually clear after denoising, as shown in Fig. 3. Results of the iterative denoising process on images. The charts correspond to profiles obtained from the lines drawn in each image. Visual comparisons of the reconstructed volumes; the images are the projection of 18 central-plane images and the central image of the 3D volume. To demonstrate that the proposed denoising process maintains the contents of the original images, we plot the profiles of the images before and after denoising, as in our previous work [17], using ImageJ 1.48v. In Fig. 4, we plot a 26-pixel integrated intensity profile along the region of interest (ROI) 'iron aggregate area' on both the original noisy images and the denoised images. We clearly observe that the contents of the denoised images are not affected. Visual comparison of the projections of the EFTEM images using the iterative Bayesian denoising in the CTSD domain with VST for Poisson noise and the Bayesian estimator in the CTSD domain [9]. The image is zoomed in on a textured area of the Fonsecaea pedrosoi, where the yellow arrows indicate the iron aggregates on the cell wall. Figure 5 summarizes all the methods used in this study, where (A) is the reference: we reconstructed the 3D volume of the original images (i.e., without denoising) to compare its quality with that of the denoised reconstructions. The outputs of (B), (C), (D) and (E) are the tilt series denoised using the Bayesian denoiser in the wavelet domain, the contourlet-SD domain, the contourlet-SD domain applied iteratively, and the contourlet domain, respectively. 
The "Hot spot" block in the workflow represents a pre-processing step that removes aberrant pixels from the EFTEM images using the ImageJ plugin EFTEM-TomoJ [1, 15]. This step is applied before and after the denoising step to ease the alignment process during the reconstruction. The EFTEM-TomoJ and TomoJ blocks are ImageJ plugins used to compute the elemental map, using the three-window technique which requires three energy-filtered images, and the 3D tomographic reconstruction of our tilt series, respectively. To measure the performance improvement, we have calculated the SNR (Table 1) and the Weber contrast CW using the reconstructed volumes before and after denoising of the whole database (228 images) for each denoising method, i.e., 912 images in total. Analysing the results, one can see that the SNR and the CW are enhanced by all the applied methods, and that the Bayesian estimator in the wavelet and contourlet transform domains is comparable to the Bayesian estimator in the CTSD domain. One can also notice that the proposed iterative denoiser outperforms the previous methods, in particular our previous work [9], and gives much better results in terms of both SNR and CW: the SNR is enhanced by about 11 dB compared to the Bayesian estimator in the CTSD domain [9]. The main reason is that the iterative combination with a previous estimate refines the stabilization and helps to tackle the low SNR of this type of images. These findings suggest that the proposed iterative Bayesian denoising in the CTSD domain with VST is an accurate method, well adapted to recovering the fine details hidden by the Poisson noise. Table 1 SNR and contrast CW of the wall area. We should note that the accurate and judicious assumption of a Poisson distribution, instead of a Gaussian one, to model the noise in the observed EFTEM data helped to improve the considered Bayesian estimators. 
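For readers unfamiliar with the three-window technique mentioned above, the following sketch illustrates the standard per-pixel computation: a power-law background A·E^(−r) is fitted from the two pre-edge images and its extrapolation at the post-edge energy is subtracted from the post-edge image. This is only an illustrative approximation of the textbook method, not the EFTEM-TomoJ implementation; the default energies match the tilt series used here:

```python
import numpy as np

def three_window_map(i_pre1, i_pre2, i_post, e1=650.0, e2=680.0, e3=710.0):
    """Elemental map via the three-window technique.

    Per pixel, fit I(E) = A * E**(-r) from the two pre-edge images
    (i_pre1 at e1, i_pre2 at e2), extrapolate the background to the
    post-edge energy e3, and subtract it from i_post.
    """
    eps = 1e-12  # guard against log/division of zero counts
    r = np.log((i_pre1 + eps) / (i_pre2 + eps)) / np.log(e2 / e1)
    background = i_pre1 * (e3 / e1) ** (-r)
    return i_post - background
```

Applied to the 650/680 eV pre-edge images and the 710 eV post-edge image, the result is a per-pixel map of the characteristic iron signal.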
This paper has proposed a novel iterative method based on a nonparametric Bayesian estimator in the CTSD domain with VST, capable of denoising EFTEM images. The iterative combination with a previous estimate (denoised image) refines the stabilization, which leads to better image quality in terms of higher SNR and contrast and in turn enhances the 3D tomographic reconstruction. To illustrate the potential of the proposed denoising method and analyze the importance of embedding the VST framework within the iterations, we have compared the results of simplified versions of the developed algorithm (without iteration and without VST) in different domains with those of the proposed denoising algorithm. 
After applying the non-iterative Bayesian estimator in the different domains, we obtained good results, with a considerably enhanced SNR. To further address the problem of missing details in the denoised images, we refined our previous method by taking into account the geometrical information of the images (i.e., contours). We therefore applied the Bayesian denoiser iteratively in the CTSD domain, using the Anscombe transform to normalize the image noise; the EFTEM images are then denoised with a nonlinear nonparametric Bayesian estimator and returned to their original range via an optimal inverse transformation. This algorithm gave better results: as shown in Figs. 2 and 3, details that remained hidden after the previous denoising approach are now preserved. Our future work will focus on studying other nonparametric Bayesian estimators, in particular the estimator based on the Bessel-K-form (BKF) density [18–20]. Nature of data The denoising methods were applied to experimental data collected from a biological sample (Fonsecaea pedrosoi). These data consist of EFTEM tomographic tilt series acquired using a Saxton scheme from −60∘ to 60∘ with the TEMography software from JEOL Ltd (interested readers are referred to [1]). In our case, we used three series at different energies: 650 and 680 eV (corresponding to pre-edges representing the background of the chemical element Fe) and 710 eV (corresponding to the Fe L2 peak representing the characteristic iron signal), with an energy window of 20 eV; each series contains 76 gray-scale images of size 512 × 512 pixels. Figure 6 shows three example images (numbers 1, 32 and 76) from each series at the different energies (650, 680 and 710 eV) and three angles (−60∘, 0∘ and 60∘). Three principal image areas are considered in the quantitative assessments, namely: (a) cytoplasm, (b) resin, and (c) the iron aggregate area and iron aggregates on the cell wall. 
The yellow circles in the 710 eV images correspond to iron aggregates, which are considered as the useful signal and are used to evaluate the different algorithms. Example images (tilt angles of −60∘, 0∘ and 60∘) for tomographic EFTEM series acquired at 650, 680 and 710 eV. Proposed denoiser This paper proposes to denoise the EFTEM images in an iterative way. Our inputs are EFTEM images affected by Poisson noise, acquired at different energies. The histogram of the noisy images is positively skewed, as shown in Fig. 7. To denoise them, we first apply a VST to standardize the image noise. We also calculate the standard deviation (STD) of different regions of the same image, as shown in Fig. 8, to confirm that it is not constant, as expected in the case of Poisson noise. This explains why we first need to apply the VST to standardize the image noise. We then denoise the images by treating them as if they were contaminated with additive white Gaussian noise (AWGN). The proposed iterative algorithm is based on a nonparametric denoising method in the CTSD domain. After obtaining the denoised images, we apply the optimal inverse of the VST; in our case we used the most common VST for this purpose, the Anscombe transformation (AT) [7]. Histogram of an EFTEM image (image number 06 of the 650 eV tilt series with its corresponding histogram). Standard deviation (STD) values in different regions of an EFTEM image. The Anscombe transform converts Poisson noise to Gaussian noise with variance 1 [7], so, from a mathematical viewpoint, our model is $$ y=x+\varepsilon $$ where y and x are respectively the noisy EFTEM image and the original clean image to recover, and ε is additive Gaussian noise. 
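The effect of the variance stabilization described above is easy to reproduce numerically. The following sketch (an illustration with synthetic Poisson samples, not the EFTEM data) shows that region-wise standard deviations follow the local mean before the transform and are stabilized close to 1 after it:

```python
import numpy as np

def anscombe(y):
    """Forward Anscombe transform: a(y) = 2*sqrt(y + 3/8).
    Maps Poisson counts to approximately unit-variance Gaussian noise."""
    return 2.0 * np.sqrt(y + 3.0 / 8.0)

rng = np.random.default_rng(0)
# Two "regions" with different intensities, like the resin and wall areas
dark = rng.poisson(lam=10.0, size=100_000)
bright = rng.poisson(lam=100.0, size=100_000)

# Before the VST the STDs differ (roughly sqrt(10) vs sqrt(100))
print(dark.std(), bright.std())
# After the VST both STDs are stabilized close to 1
print(anscombe(dark).std(), anscombe(bright).std())
```

This is exactly the behaviour Fig. 8 checks on the real images: unequal region-wise STDs before the VST, homogeneous noise afterwards.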
Basic assumption Our input is a noisy EFTEM image y composed of pixels y(m,n), modeled as independent realizations of a Poisson process with parameter x(m,n)≥0: $$ \begin{aligned} y&(m,n) \sim P(y(m,n)|x(m,n))\\ =&\left\{\begin{array}{ll}\frac{x(m,n)^{y(m,n)}e^{-x(m,n)} }{y(m,n)!} & \quad y \in N \cup\{0\}\\ 0 & \quad elsewhere\\ \end{array}\right. \end{aligned} $$ Note that the mean and variance of y coincide and are equal to x: $$ \mathbb{E}\left\{y|x\right\}=\text{var}\left\{y|x\right\}=x $$ Proposed iterative algorithm Our goal is to homogenize the noise variance in all image regions. Therefore, we first apply the forward Anscombe transformation to each image. This transformation normalizes the image noise [21, 22] and yields an image a(y): $$ a(y)=y_{AT}=2\sqrt{y+3/8} $$ The observations a(y) can then be treated as corrupted by AWGN with homogeneous variance. After applying the AT, we apply the Bayesian denoiser in the CTSD domain (BDCTSD), proposed in our previous work [9], to enhance the observed images in terms of visual quality, contrast and SNR. For the sake of clarity, we first describe this Bayesian denoiser. The transformed observed image is represented in the contourlet-SD domain by: $$ CTSD_{k}(a)=s_{k} + \epsilon_{k} $$ where CTSDk(a), sk and εk are the contourlet coefficients in the kth directional subband of the observed noisy image, the noise-free image and the noise, respectively. Because the contourlet transform has characteristics similar to the wavelet transform, the Bayesian denoiser proposed in the wavelet domain [11, 12] can be straightforwardly extended to the contourlet domain. In our study, as in the wavelet domain, the Bayesian denoiser applied in the contourlet domain adopts a prior statistical model for sk and imposes it on the contourlet coefficients to describe their distribution. 
On the other hand, it has been shown that the statistical behavior of contourlet coefficients is successfully modeled by families of heavy-tailed distributions such as the α-stable. More precisely, Sadreazami et al. [23] demonstrated, through histograms and the kurtosis of the contourlet coefficients, that the symmetric α-stable family is a more appropriate distribution for modeling the contourlet coefficients of natural images than families with exponential tails such as the generalized Gaussian. In view of this, we propose to use the α-stable prior with the scale-mixture approximation, called the "α-stable mixture", to model the contourlet subband coefficients [9]. The denoised contourlet coefficients of the image are then estimated by the L2-based Bayes rule, which corresponds to the posterior conditional mean (PCM) estimate, as shown in our previous work [9]. The inverse contourlet transform is applied to the processed contourlet coefficients to obtain the denoised image. The Bayesian denoiser BDCTSD can be viewed as an efficient filter for AWGN. If denoising is ideal, we have: $$ BD_{CTSD}(y_{AT})=BD_{CTSD}(a(y))=\mathbb{E}[y_{AT}|{y}] $$ The so-called exact unbiased inverse of a [7] $$ I_{a}^{p}: \mathbb{E}\left[a(y)| x\right]\mapsto \mathbb{E}\left[y| x\right]=x $$ is used to map the denoised image back to the original range of y, thus yielding an estimate of x: $$ \widehat{x}=I_{a}^{p}(BD_{CTSD}(y_{AT})) $$ where BDCTSD denotes the Bayesian denoiser in the CTSD domain proposed in [9]. The main steps of the proposed denoising algorithm are as follows: Step 1: Normalize the noise variance of the observed EFTEM data by applying the VST to each image of the three tilt series. This step produces an EFTEM data set in which each image yAT can be treated as contaminated with AWGN. Step 2: Apply the Bayesian denoiser in the CTSD domain (BDCTSD) [9] to the transformed noisy data. 
BDCTSD consists of: (a) computing the CTSD coefficients of yAT, (b) denoising the detail coefficients of the CTSD at each scale and each orientation, and (c) reconstructing the denoised image by applying the inverse CTSD to the estimated coefficients. This is done for each image separately. We should recall that, for the Bayesian denoiser in the contourlet transform and the contourlet-SD, we set the number of levels of the Directional Filter Bank (DFB) at each pyramidal level to (2, 3, 4, 5) with 'pkva' filters, and we did not downsample the low-pass subband at the first level of decomposition, following [6]. Step 3: Apply the optimal inverse AT to map the denoised image back to the original range of y. Figure 9 summarizes the steps of the proposed denoising algorithm. Flowchart of the Bayesian denoiser in the CTSD domain with variance stabilization using the Anscombe transform. In order to enhance the performance of our denoiser, we follow the same steps as in the paper of Lucio Azzari and Alessandro Foi [7]. We use an iterative algorithm based on a convex combination of \(\widehat {x}_{i-1}\) and y: $$ \overline{y}_{i}=\lambda_{i}y+(1-\lambda_{i})\widehat{x}_{i-1} $$ where 0<λi≤1 and \(\widehat {x}_{i}\) is the estimate of \(\widehat {x}\) at iteration i. λi depends on the number of iterations K and on λK, and is defined as \(\lambda _{i} = 1 - \frac {i-1}{K-1}(1-\lambda _{K})\), where the parameters K and λK are adaptively selected based on the quantiles of y [7]. In our experimental study, all results use K≤4, because increasing the number of iterations brings no significant enhancement in terms of SNR or CW, while the running time of the algorithm increases. At each iteration of the algorithm, we use the previous estimate \(\widehat {x}_{i-1}\) in the convex combination. We apply the Anscombe transformation to the image \(\overline {y}_{i}\), yielding \(f_{i}=a(\overline {y}_{i})=\widehat {y}_{AT_{i}}\). 
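The iteration just described (convex combination with the λ schedule, forward VST, denoising, inverse VST) can be sketched as follows. This is a minimal NumPy illustration under stated substitutions: a 3×3 mean filter stands in for the Bayesian denoiser BDCTSD, and the simple algebraic inverse of the Anscombe transform replaces the exact unbiased inverse of [24]:

```python
import numpy as np

def anscombe(y):
    # forward VST: a(y) = 2*sqrt(y + 3/8)
    return 2.0 * np.sqrt(y + 3.0 / 8.0)

def inv_anscombe(a):
    # simple algebraic inverse; the paper uses the exact unbiased inverse [24]
    return (a / 2.0) ** 2 - 3.0 / 8.0

def box_blur(a):
    # 3x3 mean filter: a NumPy-only stand-in for BD_CTSD
    p = np.pad(a, 1, mode="edge")
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def iterative_denoise(y, K=4, lambda_K=0.2, denoiser=box_blur):
    """Noisy + estimate convex combination (after Azzari & Foi [7]):
    y_bar_i = lam_i*y + (1-lam_i)*x_hat_{i-1},
    with lam_i = 1 - (i-1)/(K-1)*(1-lambda_K), so lam_1 = 1."""
    x_hat = y.astype(float)
    for i in range(1, K + 1):
        lam = 1.0 if K == 1 else 1.0 - (i - 1) / (K - 1) * (1.0 - lambda_K)
        y_bar = lam * y + (1.0 - lam) * x_hat
        x_hat = inv_anscombe(denoiser(anscombe(y_bar)))
    return x_hat
```

With `lam_1 = 1` the first pass is a plain VST-denoise-inverse cycle; later passes mix the noisy data with the previous estimate, which is the refinement of the stabilization discussed in the results.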
Then we perform the Bayesian denoising process BDCTSD to obtain a denoised image \(D_{i}= BD_{CTSD}[a(\overline {y}_{i})]\). After obtaining Di, we return it to its original range by applying the exact unbiased inverse of fi [24]: $$ \widehat{x}_{i}=I_{f_{i}}^{\lambda_{i}}(D_{i}) $$ As in [7], we perform the convex combination with a linear binning, which can be especially beneficial in the first iterations: $$ \widehat{x}_{i}= B_{h_{i}}^{-1}\left[I_{f_{i}}^{\lambda_{i}}\left(BD_{CTSD}\left[f_{i}\left(B_{h_{i}}\left[\lambda_{i}y+(1-\lambda_{i})\widehat{x}_{i-1}\right]\right)\right]\right)\right] $$ \(B_{h_{i}}\) is the binning operator and hi is the size of the small block at the ith iteration (i.e., a bin of hi×hi pixels). This operator can be applied to \(\overline {y}_{i}\), yielding a smaller image in which each bin of hi×hi pixels from \(\overline {y}_{i}\) is represented by a single pixel equal to their sum. Note that \(B_{h_{i}}[\overline {y}_{i}]\) is subject to the same conditional probability as \(\overline {y}_{i}\), which means that the adoption of binning interferes neither with the VST [7] nor with BDCTSD [9]. \(B_{h_{i}}^{-1}\) is the inverse binning operator. The entire denoising algorithm is summarized in Fig. 10. Flowchart of the Bayesian denoiser with variance stabilization using the Anscombe transform. AT: Anscombe transformation; AWGN: Additive white Gaussian noise; CTSD: Contourlet transform with sharp frequency localization; DFB: Directional filter bank; EFTEM: Energy-filtered transmission electron microscopy; SNR: Signal-to-noise ratio; STD: Standard deviation; TET: Transmission electron tomography; VST: Variance stabilizing transformation. Messaoudi C, Aschman N, Cunha M, Oikawa T, Sorzano CO, Marco S. Three-dimensional chemical mapping by eftem-tomoj including improvement of snr by pca and art reconstruction of volume by noise suppression. Microsc Microanal. 
2013; 19(6):1669–77. Cunha ALD, Zhou J, Do MN. The nonsubsampled contourlet transform: theory, design, and applications. IEEE Trans Image Process. 2006; 15(10):3089–101. Sid-Ahmed S, Messali Z, Ouahabi A, Trépout S, Messaoudi C, Marco S. Bilateral filtering and wavelets based image denoising: Application to electron microscopy images with low electron dose. Int J Recent Trends Eng Technol. 2014; 11(1):153–64. Zuo JM. Electron detection characteristics of a slow-scan ccd camera, imaging plates and film, and electron image restoration. Microsc Res Tech. 2000; 49(3):245–68. Vulović M, Ravelli RB, van Vliet LJ, Koster AJ, Lazić LI, Lücken U, Rullgård H, Öktem O, Rieger B. Image formation modeling in cryo-electron microscopy. J Struct Biol. 2013; 183(1):19–32. Lu Y, Do MN. A new contourlet transform with sharp frequency localization. In: Proceedings of the 2006 IEEE International Conference on Image Processing (ICIP). Atlanta: IEEE: 2006. p. 1629–32. https://doi.org/10.1109/ICIP.2006.312657. Azzari L, Foi A. Variance stabilization for noisy+estimate combination in iterative poisson denoising. IEEE Signal Process Lett. 2016; 23(8):1086–90. Boubchir L, Al-Maadeed S, Bouridane A. Undecimated wavelet-based bayesian denoising in mixed poisson-gaussian noise with application on medical and biological images. In: The 4th International Conference on Image Processing Theory, Tools and Applications (IPTA). Paris: IEEE: 2014. p. 1–5. https://doi.org/10.1109/IPTA.2014.7001926. Sid-Ahmed S, Messali Z, Ouahabi A, Trepout S, Messaoudi C, Sergio M. Nonparametric denoising methods based on contourlet transform with sharp frequency localization: Application to low exposure time electron microscopy images. Entropy. 2015; 17(5):3461–78. Boubchir L, Fadili J. A closed-form nonparametric bayesian estimator in the wavelet domain of images using an approximate α-stable prior. Pattern Recogn Lett. 2006; 27(12):1370–82. Boubchir L, Fadili J, Bloyet D. 
Bayesian denoising in the wavelet-domain using an analytical approximate α-stable prior. In: The 17th International Conference on Pattern Recognition (ICPR). Cambridge: IEEE: 2004. p. 889–92. https://doi.org/10.1109/ICPR.2004.1333915. Boudjelal A, Messali Z, Boubchir L, Chetih N. Nonparametric bayesian estimation structures in the wavelet domain of multiple noisy image copies. In: The 6th International Conference on Sciences of Electronics, Technologies of Information and Telecommunications (SETIT). Sousse: IEEE: 2012. p. 495–501. https://doi.org/10.1109/SETIT.2012.6481962. Sid-Ahmed S, Messali Z, Ouahabi A, Trepout S, Messaoudi C, Marco S, Mohammad-Djafari A, Barbaresco F. Non parametric denoising methods based on wavelets: Application to electron microscopy images in low exposure time. In: AIP Conference Proceedings. AIP: 2015. p. 403–13. Do MN, Vetterli M. The contourlet transform: an efficient directional multiresolution image representation. IEEE Trans Image Process. 2005; 14(12):2091–106. Henderson R. Realizing the potential of electron cryo-microscopy. Q Rev Biophys. 2004; 37(1):3–13. Laurent G. Ecrans Plats et Vidéoprojecteurs - 2 Éd: Principes, Fonctionnement et Maintenance. In: Audio-Photo-Vidéo. Dunod: 2014. https://www.dunod.com/sciences-techniques/ecrans-plats-et-videoprojecteurs-principes-fonctionnement-et-maintenance?gclid=Cj0KCQjww47nBRDlARIsAEJ34blH9GIZX0mTHyMw3penlvJzlTz3HJk9YWu_ZtfTojrlEsVGxip7NDkaAnlMEALw_wcB. Ahmed SS, Messali Z, Poyer F, Roui LL-L, Desjardins L, Cassoux N, Thomas CD, Marco S, Lemaitre S. Iterative variance stabilizing transformation denoising of spectral domain optical coherence tomography images. Applied to Retinoblastoma. Ophthalmic Res. 2018; 59(3):164–9. Fadili J, Boubchir L. Analytical form for a bayesian wavelet estimator of images using the bessel k form densities. IEEE Trans Image Process. 2005; 14(2):231–40. Boubchir L, Fadili J. 
Bayesian denoising based on the map estimation in wavelet-domain using bessel k form prior. In: International Conference on Image Processing (ICIP). Genova: IEEE: 2005. p. 113. https://doi.org/10.1109/ICIP.2005.1529700. Boubchir L, Nait-Ali A, Petit E. Multivariate statistical modeling of images in sparse multiscale transforms domain. In: The 17th IEEE International Conference on Image Processing (ICIP). Hong Kong: IEEE: 2010. p. 1877–80. https://doi.org/10.1109/ICIP.2010.5652329. Boubchir L, Boashash B. Wavelet denoising based on the map estimation using the bkf prior with application to images and eeg signals. IEEE Trans Signal Process. 2013; 61(8):1880–94. Fadili J, Starck J-L, Boubchir L. Morphological diversity and sparse image denoising. IEEE Int Conf Acoust Speech Signal Process (ICASSP). 2007; I:589–92. Sadreazami H, Ahmad MO, Swamy MNS. Contourlet domain image modeling by using the alpha-stable family of distributions. In: 2014 IEEE International Symposium on Circuits and Systems (ISCAS). Melbourne VIC: IEEE: 2014. https://doi.org/10.1109/ISCAS.2014.6865378. Makitalo M, Foi A. Optimal inversion of the anscombe transformation in low-count poisson image denoising. IEEE Trans Image Process. 2011; 20(1):99–109. The authors wish to thank Sylvain Trépout for valuable discussions and suggestions concerning the biological data. The project was supported by the High Ministry of Education of the Algerian Republic and Campus France, project 33257ZB, Huber Curien PHC Tassili, with grant number 15MDU950 and by Agence Nationale de la Recherche ANR-11-BSV8-016. The authors want to acknowledge the PICT-IBiSA for providing access to chemical imaging equipment. The funding bodies had no role in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript. All data generated or analysed during this study are included in this published article and its supplementary information files. 
Soumia Sid Ahmed, Zoubeida Messali and Larbi Boubchir contributed equally to this work. Faculty of Science and Technology, Mohamed El Bachir El Ibrahimi University, Bordj Bou Arreridj, Algeria Soumia Sid Ahmed & Zoubeida Messali LIASD research Lab., Department of Computer Science, University of Paris 8, Saint-Denis, France Larbi Boubchir Department of Computer and Information Sciences, Northumbria University, Newcastle upon Tyne, UK Ahmed Bouridane INSERM, Institut Curie, University of Paris Saclay, Orsay, France Sergio Marco & Cédric Messaoudi Soumia Sid Ahmed Zoubeida Messali Sergio Marco Cédric Messaoudi SSA, ZM and LB initiated the contribution. SSA implemented the algorithms in Matlab code and obtained the quantitative results. ZM and SM performed concept experiments and workflows. SM and CM performed EFTEM image acquisitions. ZM, SM, LB, and AB coordinated the team. All authors contributed in drafting and reviewing the manuscript, as well as in analyzing, discussing and interpreting the results. All authors read and approved the final manuscript. Correspondence to Larbi Boubchir. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Sid Ahmed, S., Messali, Z., Boubchir, L. et al. Iterative Bayesian denoising based on variance stabilization using Contourlet Transform with Sharp Frequency Localization: application to EFTEM images. BMC biomed eng 1, 13 (2019). 
https://doi.org/10.1186/s42490-019-0013-0 Keywords: Image denoising; Bayesian estimation; Contourlet transform; EFTEM
Brain Informatics Spike pattern recognition by supervised classification in low dimensional embedding space Volume 3 Supplement 2 Special Issue: EEG Analysis Techniques and Applications Evangelia I. Zacharaki ORCID: orcid.org/0000-0001-8228-04371,2, Iosif Mporas1, Kyriakos Garganis3 & Vasileios Megalooikonomou1 Brain Informatics volume 3, pages 73–83 (2016)Cite this article Epileptiform discharges in interictal electroencephalography (EEG) form the mainstay of epilepsy diagnosis and localization of seizure onset. Visual analysis is rater-dependent and time consuming, especially for long-term recordings, while computerized methods can provide efficiency in reviewing long EEG recordings. This paper presents a machine learning approach for automated detection of epileptiform discharges (spikes). The proposed method first detects spike patterns by calculating similarity to a coarse shape model of a spike waveform and then refines the results by identifying subtle differences between actual spikes and false detections. Pattern classification is performed using support vector machines in a low dimensional space on which the original waveforms are embedded by locality preserving projections. The automatic detection results are compared to experts' manual annotations (101 spikes) on a whole-night sleep EEG recording. The high sensitivity (97 %) and the low false positive rate (0.1 min−1), calculated by intra-patient cross-validation, highlight the potential of the method for automated interictal EEG assessment. The detection of epileptiform discharges in interictal EEG is important for the diagnosis of epilepsy. Interictal spikes are brief (<250 ms), morphologically defined events observed in the EEGs of patients predisposed to spontaneous seizures of focal onset [1]. The spikes are generated by the synchronous discharges of a group of neurons in a region referred to as the epileptic focus [1]. 
The detection of spikes is difficult to accomplish due to their similarity to waves that are part of normal EEG or artifacts, and due to the wide variability in spike morphology and background between patients [2]. Moreover, spike definitions are imprecise and vary among neurophysiologists, who often do not mark the same events as spikes. A comprehensive review on automated spike detection methods is presented in [3], and later updated in [4], while a comparative analysis is presented by Wilson and Emerson [5], and Halford [6]. According to the review studies, the methods are classified into different categories based on the spike detection criterion, while many approaches use a combination of methods in a multi-stage framework. In more detail, some methods extract distinctive attributes of the spikes, such as height and duration, mimicking the criteria used by the neurophysiologists [7], or utilize knowledge-based rules (spatial and temporal) [8, 9]. Other methods characterize the spikes in the time or frequency domain and decompose the EEG signal through morphological analysis [10], or assume local stationarity of the noise and detect spikes as deviations from that stationarity by applying parametric models [11, 12]. There are methods in which a template (created by averaging expert-defined spikes) is used for matching against the extracted EEG waveforms [12]. Other studies use independent component analysis [13], apply artificial neural networks (ANNs) [3, 14], clustering techniques [15], or classification methods [16]. Despite the plethora of methods, spike assessment is often still performed visually due to the increased false discovery rate of most methods. Among the methods with the highest sensitivity (>0.92) and selectivity (>0.8) are the ones reported in [14, 17–20]. However, their accuracy is not easily comparable. 
Some methods have not been evaluated on long-term recordings but on preselected (usually by neurologists with experience in EEG reading) EEG segments of epileptiform and non-epileptiform discharges [17, 20]. Since these EEG segments have distinguishable (visually identifiable) patterns, higher accuracy is expected than on long EEG recordings, which may include artifacts and/or unclear EEG patterns. In [14], Gabor and Seyal applied ANNs for the identification of epileptiform transients in EEG signals. Their method is not completely automated since, prior to selection of training patterns, a user has to identify the peak of a spike or sharp wave that will be used for training, as well as the duration of the rising phase and the falling phase. Selection of training patterns was accomplished by the user after viewing a graphic display of the EEG signal. ANNs were also used in [19], after a template matching step where the user visually selects a few spikes from a set of test signals. Features of the signal were obtained by wavelet transformation and subsequently used to train a feed-forward ANN. Context information from adjacent channels was utilized to reject artifacts. In [18], an expert system is proposed which exploits multi-channel EEG, as well as electrocardiogram (ECG), electrooculogram (EOG), and electromyogram (EMG) channels. The use of multiple sensors provides more information and helps better differentiate artifacts, e.g., due to eye motion or body movement. However, since channels additional to EEG (e.g., ECG, EOG, EMG) are not always available or easy to acquire, our method relies only on EEG signals. We propose a methodology that increases specificity in a two-stage process incorporating pattern classification. 
Similarly to most pattern detection methods in signal processing, the amount of data processed is reduced by first extracting candidate waveforms based on low-level detection analysis (by feature extraction), while classification is subsequently performed to maximize the specificity of the overall method [3]. Specifically, the proposed method first detects candidate spikes based on a mimetic approach, and afterwards classifies the candidate spikes by embedding the data in a low dimensional space and applying supervised classification in the embedding space. The contribution of the proposed method is that (i) it is fully automated, i.e., no user interaction or manual intervention is required, (ii) it is template-free, thus it generalizes to any morphological patterns and shapes and can easily be applied for the detection of other waveforms as long as some training patterns have been defined, (iii) it applies to all stages of sleep and is therefore appropriate for sleep monitoring, and (iv) it achieves high sensitivity with a low false positive rate. In the remaining part of the paper, we describe the proposed methodology in detail in Sect. 2 and report the evaluation results in Sect. 3. Discussion and conclusions of this work are provided in Sects. 4 and 5, respectively. Interictal discharges may be morphologically divided into sharp waves, spikes, spike-wave complexes, and polyspike-wave complexes [21]. The current study focuses on EEG recordings with spikes and/or sharp waves. Spikes are transients, clearly distinguishable from background activity, with a pointed peak and a duration of approximately 20–70 ms, whereas sharp waves are the same as spikes but with a duration of 70–200 ms [21]. For simplification, we will use the term 'spike' for both spikes and sharp waves throughout the rest of this paper. The proposed method first models the shape of the spike coarsely by breaking down the EEG signal around major peaks into half-waves. 
Thresholding of shape characteristics extracted from the half-waves, such as amplitude and duration, is applied to generate a number of candidate spike locations. Subsequently, the method classifies the candidate transients into spikes and non-spikes by learning the patterns of spikes using manifold learning, dimensionality reduction, and non-linear supervised classification. The whole pipeline of the method is illustrated in Fig. 1. The analysis involves a single time series, which can be obtained by averaging the recordings of selected channels.

Fig. 1 Spike detection framework. The 1st step of the method detects spike-like waveforms by extracting the two half-waves. The half-waves are defined between the negative peak (marked with a red circle) and the two positive peaks (marked with green stars) and are characterized by the amplitude difference (A1, A2) and duration (D1, D2). The 2nd step of the method classifies the detected transients into spikes and non-spikes using machine learning techniques.

Figure 2 illustrates an example of recordings from randomly selected electrodes, symmetric across the midsagittal plane, with a spike annotation. The spike does not appear uniformly in all channels but is mostly evident in the channels of the right hemisphere (in this case), and mainly in the F8 electrode. The individual steps of the method are explained in more detail next.

Fig. 2 EEG recordings of selected channels showing a spike example (marked at the negative peak with a red arrow).

The raw EEG recordings are first downsampled (e.g., to 100 Hz) to reduce dimensionality, and then a notch filter is applied with cut-off frequency at 50 Hz. Baseline correction is performed by calculating the mean signal in overlapping segments. This stepwise constant component is subsequently smoothed using a moving average filter and subtracted from the original signal.
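As a concrete illustration of the downsampling and baseline-correction steps just described, a minimal Python sketch could look as follows. The function name, window length, and plain decimation are our own illustrative assumptions (the paper does not specify them), and the 50 Hz notch filter is omitted for brevity:

```python
import numpy as np

def preprocess(eeg, fs=500, fs_target=100, baseline_win=1.0):
    """Downsample the raw EEG and subtract a smoothed baseline estimate.

    Plain decimation is used for simplicity; in practice an anti-aliased
    resampler and a 50 Hz notch filter would be applied as well.
    """
    step = fs // fs_target
    x = eeg[::step].astype(float)          # naive downsampling
    n = int(baseline_win * fs_target)      # moving-average window (samples)
    kernel = np.ones(n) / n
    baseline = np.convolve(x, kernel, mode="same")
    return x - baseline                    # zero-centers the local EEG average
```

Subtracting the moving-average baseline is what later permits absolute amplitude thresholds on the half-waves, since the local EEG average is brought to zero.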
Only the channel that clearly depicts the spikes is selected for further analysis. If more than one channel is informative, the average signal is calculated and used as input to the pattern-analysis step of the method. Since the same channels are also used by the expert for visual annotation, the results of the method can easily be assessed based on the temporal localization of the manually and automatically detected spikes.

Spike detection by shape analysis

First, a peak detection algorithm is applied to detect the primary vertex of the spike in the form of local minima. In order to reduce the number of candidate peaks, only peaks separated by at least 100 ms are retained, while small peaks that may occur in close proximity to larger local peaks are ignored. Then, around each detected peak, the EEG signal is extracted within a window (starting 100 ms before the primary vertex and ending 200 ms after it) defining the spike waveform. For each waveform, the two half-waves are segmented and four time-domain parameters are calculated: the amplitude difference (A1, A2) and the duration (D1, D2) of each half-wave [15]. These parameters describe the slope around the primary vertex and are calculated as the amplitude difference and time interval between the primary vertex (wave minimum) and the two closest local maxima (before and after the minimum), respectively. Figure 1 shows the three peaks: the primary vertex marked with a red circle and the two closest maxima indicated with green stars. Thresholding of the four parameters is applied to distinguish candidate spikes from other artifacts. The minimum and maximum threshold values used in this study are shown in Table 1. A maximum threshold on amplitude is used to discard spikes due to noise or movement. Amplitude thresholds can be used because baseline correction has previously been applied, zero-centering the local EEG average.
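The mimetic 1st step can be sketched as follows. The peak-detection logic mirrors the description above, but the threshold ranges are illustrative placeholders, not the values of Table 1:

```python
import numpy as np

def detect_candidates(x, fs=100, min_sep=0.1,
                      a_rng=(20.0, 200.0), d_rng=(0.02, 0.12)):
    """Return indices of candidate primary vertices (negative peaks).

    For each local minimum, the two half-waves to the nearest local maxima
    are measured; the peak is kept if the amplitude differences (A1, A2)
    and durations (D1, D2) fall inside the given (illustrative) ranges.
    """
    minima = [i for i in range(1, len(x) - 1)
              if x[i] < x[i - 1] and x[i] <= x[i + 1]]
    maxima = np.array([i for i in range(1, len(x) - 1)
                       if x[i] > x[i - 1] and x[i] >= x[i + 1]])
    out, last = [], -np.inf
    for i in minima:
        if i - last < min_sep * fs:           # enforce 100 ms separation
            continue
        left, right = maxima[maxima < i], maxima[maxima > i]
        if len(left) == 0 or len(right) == 0:
            continue
        l, r = left[-1], right[0]
        A1, A2 = x[l] - x[i], x[r] - x[i]     # amplitude differences
        D1, D2 = (i - l) / fs, (r - i) / fs   # half-wave durations (s)
        if all(a_rng[0] <= A <= a_rng[1] for A in (A1, A2)) and \
           all(d_rng[0] <= D <= d_rng[1] for D in (D1, D2)):
            out.append(i)
            last = i
    return out
```

As in the paper, loosening `a_rng` and `d_rng` trades specificity for sensitivity, which is why a second, classification-based step follows.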
Table 1 Threshold values for amplitude difference (A1, A2) and duration (D1, D2) of each half-wave

It should be noted that spike amplitudes differ between subjects; thus we relaxed the threshold constraints to make the method applicable also to "unseen" data and to allow detection with high sensitivity. As a consequence, the specificity of this step is especially low, and a subsequent step is required to reduce false detections using a more elaborate approach.

Spike classification in a low dimensional space

Low dimensional embedding

If the raw signal (the waveform around the primary vertex) is used as the representation of the detected spikes, classification is doomed to fail due to the high dimensionality of the input pattern. When the number of parameters increases, the volume of the space grows so fast that the concepts of similarity, distance, or nearest neighbor may not even be qualitatively meaningful, thus impeding clustering or classification. Therefore, in this step of the method, the extracted candidate transients are classified either as spikes or as non-spikes by assuming that the spike and spike-like patterns reside on the same low dimensional manifold but in different regions. If this assumption holds, classification can be performed much more easily after embedding the data than in the original high dimensional space. Thus, in this step, we first learn the low dimensional embedding using a set of spikes annotated by an expert (positive class) and a set of spike-like waves that are nonspecific sharp transients (negative class). The nonspecific sharp transients are all transients detected in the first step that were not annotated by the expert; thus, the negative class may also include spikes that were simply missed by the expert.
We selected waves with spike-like patterns that satisfy the shape constraints set in the 1st step of the method as the negative class, instead of choosing random background segments, because the more similar the two classes are, the more likely they are to occupy the same manifold, allowing the separation between them to be learnt. We used locality preserving projections (LPP) [22, 23] to embed the data in a low dimensional space. LPP is a linear approximation of the nonlinear Laplacian Eigenmap [24]. It finds a transformation matrix A that maps a set of points \(x_{i} \in R^{d}\) \(\left( {i = 1, \ldots ,m} \right)\) into a set of points \(y_{i} \in R^{l}\), \(y_{i} = A^{T} x_{i}\), such that \(l \ll d\). LPP is designed to preserve local structure; thus, it is likely that a nearest neighbor search in the low dimensional space will yield similar results to one in the high dimensional space. The intrinsic dimensionality (l) of the transients is unknown, but we used the maximum likelihood estimation (MLE) method to obtain an estimate. The MLE method estimates the unknown parameters by maximizing the likelihood of the observed data. It is a widely used estimation method with desirable properties as the number of samples increases, such as consistency, efficiency, and asymptotic normality. In detail, the LPP algorithm is as follows. Let X be the d × m matrix including the m waveforms. The samples constitute the nodes of a graph connected with edges whose weights depend on the samples' distance. If W is the m × m weighting matrix and D is a diagonal matrix whose entries are the column sums of W, the eigenvectors \(\alpha_{k}\) and eigenvalues \(\lambda_{k} \left( {k = 0, \ldots ,l - 1} \right)\) of the following generalized eigenvector problem are computed: $$XLX^{T} \alpha_{k} = \lambda_{k} XDX^{T} \alpha_{k}$$ where L = D − W is the Laplacian matrix.
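A minimal sketch of the LPP computation follows. It uses a fully connected graph with heat-kernel weights, which is a simplifying assumption on our part (the paper does not specify the graph construction; [22] also allows k-NN graphs and binary weights):

```python
import numpy as np
from scipy.linalg import eigh

def lpp(X, l=2, t=1.0):
    """Locality preserving projections (simplified sketch).

    X: d x m matrix whose columns are the m waveforms.
    Returns the d x l projection matrix A (smallest-eigenvalue directions
    of the generalized eigenproblem X L X^T a = lambda X D X^T a).
    """
    d, m = X.shape
    # pairwise squared distances between columns
    sq = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    W = np.exp(-sq / t)                      # heat-kernel weights
    D = np.diag(W.sum(axis=1))
    L = D - W                                # graph Laplacian
    A_mat = X @ L @ X.T
    B_mat = X @ D @ X.T + 1e-8 * np.eye(d)   # small ridge for stability
    vals, vecs = eigh(A_mat, B_mat)          # generalized symmetric eigh
    return vecs[:, :l]                       # l smallest eigenvalues
```

Because the mapping is linear, embedding new data is just `A.T @ X_new`, which is the out-of-sample property exploited later in the pipeline.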
The d × l transformation matrix A is formed by the l column vectors \(\alpha_{k}\) ordered according to the corresponding eigenvalues. The mapped data are subsequently introduced to an SVM classifier [25]. SVM is a widely used algorithm that captures complex relationships between the data points and finds an optimal boundary between the class outputs. A Gaussian radial basis function is used as the kernel to perform non-linear classification. The C and γ parameters, controlling the misclassification penalty and kernel size, respectively, were defined as in [26]. Briefly, since the data are unbalanced and the sample size is rather small to produce balanced classes by subsampling the largest class, we used a weighted SVM and set the ratio of penalties for the two classes, C1 and C2, equal to the inverse ratio of the training class sizes. Thus, we avoided bias toward the class with the larger training size. We defined γ to be adaptive to the dimensionality l, using the equation \(\gamma = \frac{1}{{\left( {k \cdot l \cdot log\left( l \right)} \right)^{2} }}\), where k is a constant determined such that the fraction of the training samples contained in the kernel is approximately 20 %. The total pipeline, including the training and test phases, is illustrated in Fig. 3. Training data \(X_{\text{T}}\) and test data \(X_{\text{new}}\) are concatenated into the matrix X, which is used to learn the transformation matrix A based on LPP. In the training phase, the training data are embedded in the low dimensional space, $$Y_{\text{T}} = X_{\text{T}} A$$ and the embedded data are then used in combination with the corresponding class labels to learn an SVM classification model. Similarly, in the test phase, the test data are embedded in the low dimensional space, $$Y_{\text{new}} = X_{\text{new}} A$$ and the embedded data are subsequently classified into spikes or non-spikes based on the learnt classification model.
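The weighted RBF-SVM step might be sketched with scikit-learn as follows. The constant `k` and the use of scikit-learn's `class_weight` are our assumptions standing in for the LIBSVM [25] setup; only the γ rule and the inverse-frequency penalty ratio come from the text above:

```python
import numpy as np
from sklearn.svm import SVC

def train_weighted_svm(Y, labels, l, k=2.0):
    """Fit an RBF-kernel SVM on the embedded data Y (n_samples x l).

    Class penalties are set inversely proportional to class sizes, and
    gamma follows the paper's rule gamma = 1 / (k * l * log(l))^2,
    where k is a tuning constant (its value here is illustrative).
    """
    n_pos = int(np.sum(labels == 1))
    n_neg = int(np.sum(labels == 0))
    # inverse-frequency penalty ratio avoids bias toward the large class
    class_weight = {1: n_neg / n_pos, 0: 1.0}
    gamma = 1.0 / (k * l * np.log(l)) ** 2
    clf = SVC(kernel="rbf", gamma=gamma, class_weight=class_weight)
    clf.fit(Y, labels)
    return clf
```

Making γ shrink with the embedding dimensionality l widens the kernel as the space grows, which keeps a roughly constant fraction of training samples inside the kernel.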
Since LPP supports exact out-of-sample extension, the matrix A could also be learnt from the training data alone and then applied to any new data set.

Fig. 3 Training and testing phase of embedding and classification

Assessment is performed by examining the temporal coincidence of the manually (by the expert) and automatically (by the system) detected spikes [6]. The maximum time interval between an automatically detected peak and the closest marker (detection latency) that allows a detection to be characterized as a true positive (TP) is set to 50 ms. A spike detected by the system with higher latency is characterized as a false positive (FP), whereas the absence of a detection within the same time interval around a marker is a false negative (FN). Sensitivity is the percentage of correct detections by the system among the positive events marked by the rater (TP + FN). Selectivity, or precision, is the percentage of correct detections by the system among all positive detections (TP + FP) [6]. FP/min is the number of false positive spikes per minute of recording. A single measure of accuracy is the F-score, which expresses the harmonic mean of precision and sensitivity: $$F{\text{-}}score = \frac{2TP}{2TP + FP + FN}$$ The assessment refers to both steps of the method and is performed by ten-fold cross-validation in order to exploit all available data. Since the 1st step of the method is rule based (unsupervised), the cross-validation is performed only on the 2nd step, including both dimensionality reduction and classification in each fold.

EEG dataset

The EEG recordings were provided by the Epilepsy Monitoring Unit, St. Luke's Hospital, Thessaloniki, Greece. The data used in this work were acquired during a whole-night sleep EEG of a subject with a history of right lobe epilepsy of fronto-temporal origin.
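The evaluation measures defined above reduce to a few lines of code, computed directly from the TP/FP/FN counts obtained via the 50 ms detection-latency rule:

```python
def detection_metrics(tp, fp, fn, minutes):
    """Sensitivity, selectivity (precision), FP/min, and F-score."""
    sensitivity = tp / (tp + fn)
    selectivity = tp / (tp + fp)            # a.k.a. precision
    fp_per_min = fp / minutes
    f_score = 2 * tp / (2 * tp + fp + fn)   # harmonic mean of the two
    return sensitivity, selectivity, fp_per_min, f_score
```

With the counts reported in the Results (TP = 98, FP = 58, FN = 3 over the 9-h, i.e., 540-min, recording), this reproduces sensitivity ≈ 0.97, selectivity ≈ 0.63, and FP/min ≈ 0.1.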
Nocturnal sleep was recorded using multi-channel electrodes positioned according to the extended international 10–20 system on an electrode cap, with a sampling frequency of 500 Hz. The spikes were visually identified by an experienced neurophysiologist as transients with pointed peaks clearly distinguished from the background activity. The markers were manually placed (in the T4 and F8 channels) at the peak of the negative phase, but imprecise markings were later corrected by automatically shifting them to the largest negative peak within a predefined neighborhood (equal to the defined detection latency) around the original marking. The method was assessed on 9-h recordings including 101 marked spikes.

A total of 4708 candidate spikes (99 TP and 4609 FP) were automatically detected during the 1st step of the method. Only two spikes were missed (not detected). Examples of TP and FP waveforms identified in this step are illustrated in Fig. 4. All TP and FP transients detected in the 1st step are summarized in Fig. 5 in the form of probability maps, and are also averaged to highlight shape differences between TP and FP. It is evident that, on average, the epileptic spikes follow a more distinguishable smooth pattern than the nonspecific sharp transients. Moreover, since the FP transients are many and also exhibit large (per point) variation, the mean values do not overlap with the most frequent values.

Fig. 4 Illustration example of 3 correct (left) and 3 false (right) spike detections for the selected channels (1st channel: T4, 2nd channel: F8). The same amplitude scale (±50 μV) has been used for all plots. The spike location is approximately at the center of each plot and indicated with a white line

Fig. 5 Average TP wave (left) and FP wave (right) (in yellow) overlaid on the corresponding probability maps obtained from all candidate transients detected in the 1st step of the method. The blue line indicates the zero level.
The time window is 300 ms (100 ms before the primary vertex and 200 ms after it).

The 2nd step of the method was assessed by ten-fold cross-validation on the data. The classification identified 156 (out of the 4708) waveforms as spikes, 98 of them being TP and 58 FP. Thus, the total sensitivity of the method is 0.97 (=98/101), the selectivity is 0.63, and the number of FP per minute is 0.1. The method's performance is shown in Tables 2 and 3 and is compared against other approaches reviewed by Wilson and Emerson [5] and by Halford [6]. Only methods for which both sensitivity and FP rate were reported are included for comparison in Table 2, whereas the remaining methods for which both sensitivity and selectivity were reported are shown in Table 3. Studies using intracranial EEG were excluded. For some methods, more than one set of results is reported, corresponding to different algorithms or parameters. If different training and testing datasets were used, this is indicated by two numbers separated by '/'. Although a direct comparison is not feasible due to the different data per study, it can be seen that our method performs better than all (16) reviewed methods in Table 2, whereas it has the highest sensitivity and the 4th (out of 14) lowest selectivity among the methods reviewed in Table 3.

Table 2 Comparison of methods detecting epileptic activity based on FP rate

Table 3 Comparison of methods detecting epileptic activity based on selectivity

Furthermore, a recent method detecting interictal epileptiform discharges based on the merger of increasing and decreasing sequences and SVM classification [16] achieved an average detection sensitivity of ~0.96 and a classification specificity of more than 0.98 on 20 min of light-sleep data from ten patients' EEG recordings.
Since our detection results are not directly comparable with the classification results of [16], we use for comparison only the classification performance of the 2nd step, which detected 98 (out of 99) spikes and 4551 (out of 4609) non-spikes, and thus achieved sensitivity and specificity both equal to 0.99. Detection can also be achieved through classification of all possible patterns in EEG, such as epileptiform transients (single and multiple spikes or spike-and-slow-wave complexes) and non-epileptiform transients (eye movements and artifacts), as performed in [17]. In such systems, the methods are evaluated on preselected EEG segments from each pattern category; thus the detection performance cannot be directly assessed and compared with our method, in which the total recordings are used as input.

In order to assess the contribution of the selected dimensionality reduction technique, the LPP method was replaced by other dimensionality reduction techniques [23]. The results of the best performing techniques (achieving F-score > 0.6) are shown sorted in Table 4. Linear local tangent space alignment (LLTSA) [27] performs better with respect to F-score, but we chose LPP due to its higher sensitivity, which is more important given the small value of FP/min. Neighborhood preserving embedding (NPE) [28], principal component analysis (PCA) [29], maximally collapsing metric learning (MCML) [30], stochastic proximity embedding (SPE) [31], and diffusion maps [32] also have high sensitivity, albeit with increased FP/min. The competitive performance of the method with more than one dimensionality reduction technique indicates the robustness of the framework to this choice.
Table 4 Results of the best performing dimensionality reduction techniques based on F-score

The original dimensionality of the waveforms used for classification was d = 31 (corresponding to 300 ms at 100 Hz), whereas the reduced dimensionality estimated by the MLE method was l = 14. The F-score as a function of dimensionality is shown in Fig. 6 for the three best dimensionality reduction techniques (LLTSA, LPP, NPE). It can be seen that LPP is not very sensitive to the selection of l and achieves a higher F-score for more values of l than LLTSA; thus, it is the preferred technique in this application.

Fig. 6 The F-score as a function of dimensionality for the best 3 dimensionality reduction techniques LLTSA, LPP, NPE (from top to bottom)

The method was developed in Matlab. The total computational time (including the ten-fold cross-validation scheme) on a Windows machine with an Intel Core™2 Duo CPU at 2.2 GHz was approximately 4 min for the applied dataset, but it depends highly on the number of candidate spikes extracted in the 1st step of the method.

This paper presents a system that combines a rule-based approach with machine learning for detecting interictal discharges in EEG. After the extraction of a large set of candidate spikes based on a crude shape model consisting of two half-waves, more detailed modeling of the spike waveform is performed in order to discover the non-linear structure of the data and map it to a lower dimensional space. Dimensionality reduction is performed by locality preserving projections. The main advantage of LPP is its linearity and, more importantly, that it is defined everywhere in the ambient space rather than just on the training data points. Thus, LPP can simply be applied to any new data point to locate it in the reduced representation space. Also, since LPP is derived by preserving local information, it is less sensitive to outliers than PCA.
The method achieved high sensitivity with a low false positive rate and outperformed the majority of the other approaches used for comparison. However, it should be noted that (i) the comparative data shown in Tables 2 and 3 (except for the proposed method) are extracted from the literature and should therefore be compared with care, and (ii) the intra-subject assessment of the proposed method could affect the performance, since signals recorded from different individuals can exhibit large differences, especially if there is a great age difference.

The classification of events usually relies not only on the epileptic spikes themselves, but also on other contextual information, such as spatial information (the same event in other channels) and temporal information (the time shift of the event). We did not make use of spatial and/or temporal context in the system, nor of contextual information on the surrounding background EEG. The additional spatial and temporal cues do not seem to be very important in this intra-subject analysis of single spikes, where validation is performed on the channels used during visual annotation. The development of a single-channel method is preferred because it does not necessitate multi-channel recordings. To detect spikes in new patients with no prior information (unknown seizure origin and no individualized annotations), the method can be applied to each channel independently, followed by a basic spatiotemporal fusion rule. Such a rule could impose spatial and temporal constraints on the per-channel detections in order to differentiate between TP and FP; for example, a spike should appear in at least 2 neighboring channels within a 20 ms distance between the detections. Such rules are common in EEG waveform detection [18, 33]; however, they require a larger number of annotated recordings (than currently available to us) for testing their generalization ability.
The aim of this study was to achieve high sensitivity, minimizing missed events even at the expense of reduced specificity, because the detected events can later be checked by a neurophysiologist. Our aim was the reduction of the time needed to analyze long sleep recordings through the use of an automated tool estimating interictal spike frequency. Such a tool might be especially useful in the analysis of inoperable epilepsy, such as childhood absence epilepsy, since a relative reduction in spike frequency indicates effective treatment.

In this work, we introduced a machine learning approach for personalized detection of focal EEG abnormalities, such as spikes and sharp waves, necessary for the automated assessment of the clinical implications of a recording. Although not directly comparable, the presented method has higher sensitivity (=97 %) and a smaller FP rate (=0.1 min−1) than most approaches proposed in the literature, and thus constitutes a useful tool for the automated assessment of interictal discharges in sleep EEG. Moreover, it is fully automated, template-free, and can easily be extended to the detection of other waveforms. The method has not been applied to long-term (24 h) EEG recordings, where physiological artifacts (from speaking, eating, etc.) disturb the signal and make interpretation much more difficult. Also, further evaluation on data from multiple individuals is required to assess the inter-subject performance.

References

1. Staley KJ, Dudek FE (2006) Interictal spikes and epileptogenesis. Epilepsy Curr 6(6):199–202
2. Geerts AJE (2012) Detection of interictal epileptiform discharge in EEG. Master thesis, University of Twente
3. James CJ (1997) Detection of epileptiform activity in the electroencephalogram using artificial neural networks. Ph.D. dissertation, University of Canterbury, Christchurch
4. Tzallas AT, Tsipouras MG, Tsalikakis DG, Karvounis EC, Astrakas L, Konitsiotis S, Tzaphlidou M (2012) Automated epileptic seizure detection methods: a review study. In: Stevanovic D (ed) Epilepsy—histological, electroencephalographic and psychological aspects
5. Wilson SB, Emerson R (2002) Spike detection: a review and comparison of algorithms. Clin Neurophysiol 113:1873–1881
6. Halford JJ (2009) Computerized epileptiform transient detection in the scalp electroencephalogram: obstacles to progress and the example of computerized ECG interpretation. Clin Neurophysiol 120:1909–1915
7. Davey BL, Fright WR, Carroll GJ, Jones RD (1989) Expert system approach to detection of epileptiform activity in the EEG. Med Biol Eng Comput 27:365–370
8. Dingle AA, Jones RD, Carroll GJ, Fright WR (1993) A multistage system to detect epileptiform activity in the EEG. IEEE Trans Biomed Eng 40:1260–1268
9. Gotman J, Wang LY (1991) State-dependent spike detection: concepts and preliminary results. Electroencephalogr Clin Neurophysiol 79:11–19
10. Witte H, Eiselt M, Patakova I, Petranek S, Griessbach G, Krajca V, Rother M (1991) Use of discrete Hilbert transformation for automatic spike mapping: a methodological investigation. Med Biol Eng Comput 29:242–248
11. Senhadji L, Dillenseger JL, Wendling F, Rocha C, Kinie A (1995) Wavelet analysis of EEG for 3-dimensional mapping of epileptic events. Ann Biomed Eng 23:543–552
12. Fischer G, Mars NJI, Lopes da Silva FH (1980) Pattern recognition of epileptiform transients in the electroencephalogram. Institute of Medical Physics, Utrecht
13. De Lucia M, Fritschy J, Dayan P, Holder DS (2008) A novel method for automated classification of epileptiform activity in the human electroencephalogram based on independent component analysis. Med Biol Eng Comput 46:263–272
14. Gabor AJ, Seyal M (1992) Automated interictal EEG spike detection using artificial neural networks. Electroencephalogr Clin Neurophysiol 83:271–280
15. Inan ZH, Kuntalp M (2007) A study on fuzzy C-means clustering-based systems in automatic spike detection. Comput Biol Med 37:1160–1166
16. Zhang J, Zou J, Wang M, Chen L, Wang C, Wang G (2013) Automatic detection of interictal epileptiform discharges based on time-series sequence merging method. Neurocomputing 110:35–43
17. Indiradevi KP, Elias E, Sathidevi PS, Dinesh Nayak S, Radhakrishnan K (2008) A multi-level wavelet approach for automatic detection of epileptic spikes in the electroencephalogram. Comput Biol Med 38:805–816
18. Ramabhadran B, Frost JD Jr, Glover JR, Ktonas PY (1999) An automated system for epileptogenic focus localization in the electroencephalogram. J Clin Neurophysiol 16:59–68
19. Park HS, Lee YH, Kim NG, Lee DS, Kim SI (1998) Detection of epileptiform activities in the EEG using neural network and expert system. Medinfo 9(Pt 2):1255–1259
20. Ozdamar O, Kalayci T (1998) Detection of spikes with artificial neural networks using raw EEG. Comput Biomed Res 31:122–142
21. International Federation of Societies for Clinical Neurophysiology (1974) A glossary of terms most commonly used by clinical electroencephalographers. Electroencephalogr Clin Neurophysiol 37(5):538–548
22. He X, Niyogi P (2004) Locality preserving projections. In: Thrun S, Saul LK, Scholkopf B (eds) Advances in neural information processing systems, vol 16. The MIT Press, Cambridge, p 37
23. van der Maaten LJP, Postma EO, van den Herik HJ (2009) Dimensionality reduction: a comparative review. Tilburg University technical report, TiCC-TR 2009-005
24. Belkin M, Niyogi P (2002) Laplacian Eigenmaps and spectral techniques for embedding and clustering. In: Schölkopf B, Platt JC, Hofmann T (eds) Advances in neural information processing systems, vol 14. The MIT Press, Cambridge, pp 585–591
25. Chang CC, Lin CJ (2011) LIBSVM: a library for support vector machines. ACM Trans Intell Syst Technol 2(3):1–27. http://www.csie.ntu.edu.tw/~cjlin/libsvm
26. Zacharaki EI, Wang S, Chawla S, Yoo DS, Wolf R, Melhem ER, Davatzikos C (2009) Classification of brain tumor type and grade using MRI texture and shape in a machine learning scheme. Magn Reson Med 62:1609–1618
27. Zhang T, Yang J, Zhao D, Ge X (2007) Linear local tangent space alignment and application to face recognition. Neurocomputing 70(7–9):1547–1553
28. He X, Cai D, Yan S, Zhang H-J (2005) Neighborhood preserving embedding. In: Proc ICCV 2005, vol 2, pp 1208–1213
29. Jolliffe IT (2002) Principal component analysis, 2nd edn. Springer, New York
30. Globerson A, Roweis S (2005) Metric learning by collapsing classes. In: Mozer M, Jordan M, Petsche T (eds) Advances in neural information processing systems (NIPS). The MIT Press, Cambridge
31. Agrafiotis DK (2003) Stochastic proximity embedding. J Comput Chem 24(10):1215–1221
32. Coifman RR, Lafon S (2006) Diffusion maps. Appl Comput Harmon Anal 21(1):5–30
33. Zacharaki EI, Pippa E, Koupparis A, Kokkinos V, Kostopoulos GK, Megalooikonomou V (2013) One-class classification of temporal EEG patterns for K-complex extraction. Proc IEEE Eng Med Biol Soc 2013:5801–5804
34. Hostetler WE, Doller HJ, Homan RW (1992) Assessment of a computer program to detect epileptiform spikes. Electroencephalogr Clin Neurophysiol 83:1–11
35. Webber WR, Litt B, Wilson K, Lesser RP (1994) Practical detection of epileptiform discharges (EDs) in the EEG using an artificial neural network: a comparison of raw and parameterized EEG data. Electroencephalogr Clin Neurophysiol 91:194–204
36. Feucht M, Hoffmann K, Steinberger K, Witte H, Benninger F, Arnold M, Doering A (1997) Simultaneous spike detection and topographic classification in pediatric surface EEGs. NeuroReport 8:2193–2197
37. Wilson SB, Turner CA, Emerson RG, Scheuer ML (1999) Spike detection. Clin Neurophysiol 110:404–411
38. James CJ, Jones RD, Bones PJ, Carroll GJ (1999) Detection of epileptiform discharges in the EEG by a hybrid system comprising mimetic, self-organizing artificial neural network, and fuzzy logic states. Clin Neurophysiol 110:2049–2063
39. Sugi T, Nakamura M, Ikeda A, Shibasaki H (2002) Adaptive EEG spike detection: determination of threshold values based on conditional probability. Front Med Biol Eng 11:261–277
40. Acir N, Oztura I, Kuntalp M, Baklan B, Guzeli C (2005) Automatic detection of epileptiform events in EEG by a three-stage procedure based on artificial neural networks. IEEE Trans Biomed Eng 52:30–40
41. Argoud FI, De Azevedo FM, Neto JM, Grillo E (2006) SADE3: an effective system for automated detection of epileptiform events in long-term EEG based on context information. Med Biol Eng Comput 44:459–470
42. Goelz H, Jones RD, Bones PJ (2000) Wavelet analysis of transient biomedical signals and its application to detection of epileptiform activity in the EEG. Clin Electroencephalogr 31(4):181–191
43. Kurth C, Gilliam F, Steinhoff BJ (2000) EEG spike detection with a Kohonen feature map. Ann Biomed Eng 28:1362–1369
44. Liu HS, Zhang T, Yang FS (2002) A multistage, multimethod approach for automatic detection and classification of epileptiform EEG. IEEE Trans Biomed Eng 49:1557–1566
45. Sartoretto F, Ermani M (1999) Automatic detection of epileptiform activity by single-level wavelet analysis. Clin Neurophysiol 110(2):239–249
46. Latka M, Was Z (2003) Wavelet analysis of epileptic spikes. Phys Rev E 67:1–4
47. Adjouadi M, Cabrerizo MS, Ayala M, Sanchez D, Yaylali I, Jayakar P et al (2004) A new mathematical approach based on orthogonal operators for the detection of interictal spikes in epileptogenic data. Biomed Sci Instrum 40:175–180
48. Adjouadi M, Sanchez D, Cabrerizo MS, Ayala M, Jayakar P, Yaylali I et al (2004) Interictal spike detection using the Walsh transform. IEEE Trans Biomed Eng 51:868–872
49. Exarchos TP, Tzallas AT, Fotiadis DI, Konitsiotis S, Giannopoulos S (2006) EEG transient event detection and classification using association rules. IEEE Trans Inf Technol Biomed 10:451–457
50. Tzallas AT, Karvelis PS, Katsis CD, Fotiadis DI, Giannopoulos S, Konitsiotis S (2006) A method for classification of transient events in EEG recordings: application to epilepsy diagnosis. Methods Inf Med 45:610–621
51. Van Hese P, Vanrumste B, Hallez H, Carroll GJ, Vonck K, Jones RD et al (2008) Detection of focal epileptiform events in the EEG by spatio-temporal dipole clustering. Clin Neurophysiol 119:1756–1770

Acknowledgements The authors wish to acknowledge the contribution of Dr. V. Kokkinos from St. Luke's Hospital, Thessaloniki, Greece, who supported the collection and annotation of the EEG recordings. They also wish to thank Dr. A. Koupparis and Dr. G.K. Kostopoulos from the University of Patras Medical School for valuable discussions. This study is partially funded by the European Commission under the Seventh Framework Programme (FP7/2007–2013) with grant ARMOR, Agreement Number 287720. This research has been co-financed by the European Union (European Social Fund—ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF)—Research Funding Program: Thales. Investing in knowledge society through the European Social Fund. This research was partially supported by European Research Council Grant Diocles (ERC-STG-259112).

Author information Evangelia I. Zacharaki, Iosif Mporas and Vasileios Megalooikonomou: Department of Computer Engineering and Informatics, University of Patras, Patras, Greece. Evangelia I. Zacharaki: Center for Visual Computing, CentraleSupélec/Galen Team, INRIA, Paris, France. Kyriakos Garganis: St. Luke's Hospital, Thessaloniki, Greece. Correspondence to Evangelia I. Zacharaki.

Citation: Zacharaki EI, Mporas I, Garganis K, Megalooikonomou V (2016) Spike pattern recognition by supervised classification in low dimensional embedding space. Brain Inf. 3:73–83. https://doi.org/10.1007/s40708-016-0044-4

Accepted: 24 February 2016. Issue Date: June 2016.

Keywords: Spike detection, Manifold learning, Dimensionality reduction
CommonCrawl
Elliptical orbit Physics problem. Same problem here. You know the r for J(launch) so you can express J(apogee) in terms of that. The mistake that follows from this is going to this step: E(apogee) = mv0^2/8 - 2GMm/5R. You say that J^2/(2mr^2) = mv0^2/8, but this is not true. You used J^2/(2mr^2) = (mv0r/2)^2/(2mr^2). This results in the radii cancelling, but these radii are not the same. These laws are namely: (1) planets move in elliptical orbits with the sun at one focus; (2) planets sweep out equal areas in equal intervals of time; and (3) the cubes of the mean distances of the planets from the sun are proportional to the squares of their periods of revolution (a^3 ∝ T^2). The energy of the circular orbit is given by E = -GMm/(2r) = -9.97×10^10 joules. The equation used here can also be applied to elliptical orbits with r replaced by the semimajor axis length a. The semimajor axis length is found from a = (r1 + r2)/2 = 5.3×10^6 meters. Then E = -GMm/(2a) = -1.32×10^11 joules. The energy of the elliptical orbit is lower (more negative). Elliptical orbits problem, Physics Forum: Kepler's law is developed in spherical coordinates and, solving the equation of dynamics, the solution comes out to be the shape of an ellipse. The equation of motion is something like this: $r=c^2/(GM+d \cos(\theta))$. Here $c,d$ are constants. Hence it is always stated for elliptical orbits. An introduction into elliptical orbits and the conservation of angular momentum; this is at the AP Physics level or the introductory college level physics l.. In astrodynamics or celestial mechanics, an elliptic orbit or elliptical orbit is a Kepler orbit with an eccentricity of less than 1; this includes the special case of a circular orbit, with eccentricity equal to 0. In a stricter sense, it is a Kepler orbit with an eccentricity greater than 0 and less than 1. In a wider sense, it is a Kepler orbit with negative energy. This includes the radial elliptic orbit, with eccentricity equal to 1. In a gravitational two-body problem.
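The circular-versus-elliptical energy comparison quoted above follows from E = -GMm/(2a). Here is a minimal Python sketch working per unit satellite mass; the gravitational parameter and circular radius below are illustrative assumptions, since the snippet does not give the original problem's values (only the semimajor axis 5.3×10^6 m):

```python
# Specific orbital energy of a bound orbit: epsilon = -mu / (2 a),
# where a is the semimajor axis (equal to r for a circular orbit).
MU = 3.986e14  # m^3/s^2, Earth's standard gravitational parameter (assumed)

def specific_orbital_energy(a_m, mu=MU):
    """Energy per unit mass (J/kg) of a bound orbit with semimajor axis a_m in meters."""
    return -mu / (2.0 * a_m)

r_circular = 7.0e6    # m, circular orbit radius (illustrative assumption)
a_elliptical = 5.3e6  # m, semimajor axis from the worked example above

e_circ = specific_orbital_energy(r_circular)
e_ell = specific_orbital_energy(a_elliptical)
# The smaller semimajor axis yields the more negative total energy,
# i.e. the larger binding-energy magnitude.
```

Both energies are negative, as required for bound orbits; multiplying by the satellite mass m recovers the total energy -GMm/(2a).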
Elliptical planetary orbits are apparent paths of planets about their central body, which is considered static in space. By simple mechanics, it is physically impossible for a free macro body to orbit around a moving central body in any type of geometrically closed path. To make the orbits periodic, omega_2 should be 2*omega_1. I may have some of the details a little off, but that's the gist of how you'd tackle the problem. Finding the position of a body in orbit as a function of time is basically intractable, so you're stuck with a method something like the above. CHAPTER 9. CENTRAL FORCES AND ORBITAL MECHANICS: The solution here is η(φ) = η₀ cos β(φ − δ₀), (9.28) where η₀ and δ₀ are initial conditions. Setting η = η₀, we obtain the sequence of φ values φ_n = δ₀ + 2πn/β, (9.29) at which η(φ) is a local maximum, i.e. at apoapsis, where r = r₀ + η₀. Setting r = r₀ − η₀ is the condition for closest approach, i.e. periapsis. Problem 53, Medium Difficulty: Eros has an elliptical orbit about the Sun, with a perihelion distance of $1.13 \mathrm{AU}$ and aphelion distance of 1.78 AU. What is the period of its orbit? Orbital Motion: the Orbital Motion Interactive simulates the elliptical motion of a satellite around a central body. The eccentricity of the orbit can be altered. Velocity and force vectors are shown as the satellite orbits. Users are encouraged to open the Interactive and explore. Your piece of paper shows an elliptical orbit. Suppose that the period is P = 500 days and the motion is in a counterclockwise direction. Where will the planet be at \((t - T) = 400\) days after perihelion passage? Calculate the true anomaly angle ν and use it to mark the position of the planet along the orbit. Drawing an ellipse: push two pins into a board at two points, representing the ellipse's foci. Tie a string into a loop that loosely goes around the two pins. Pull the loop taut with a pencil tip, to form a triangle. Move the pencil around while keeping the string taut.
Its tip will trace out an ellipse. The constant length of the string implies that r1 + r2 is constant, which is the defining property of an ellipse. In our Orb Lab, students solve the Kepler Problem in three steps: (1) construct an elliptical orbit, (2) measure the force F at several points r on the orbit, (3) analyze the variation of F with r to find the law of force. To construct an orbit, a team of about 10 students draws a large ellipse. Two tacks are pinned to a board. See also: Circular Orbit, Hyperbolic Orbit, Orbit, Parabolic Orbit, Two-Body Problem. My problem with this question comes in 2 different areas. If extra velocity is given to an object in orbit by the orbiting body, won't it change itself to a higher orbit, and therefore never reach the second astronaut? Yes, I know that this orbit change is elliptical while experiencing velocity, but I am not sure anymore. elliptical orbit of Mars, given to the world in the Astronomia nova, published in 1609. In order to understand the problem that Kepler set himself and to form an appreciation of his achievement in solving it, we need to know something of the physics, astronomy and mathematics he had at his disposal. 1. Physics. Elliptic Orbit: motion of a satellite in an elliptical orbit around a planet. 8.01T Physics I, Fall 2004. Dr. Peter Dourmashkin, Prof. J. David Litster, Prof. David Pritchard, Prof. Bernd Surrow. Course Material Related to This Topic: complete exam problem 5; check solution to exam problem. An AP physics problem (AP Sciences, College Confidential Forums; frenchbosco, May 11, 2012): A satellite S is in an elliptical orbit around a planet P, as shown above, with r1 and r2 being its closest and farthest distances, respectively, from the center of the planet. For the Moon's orbit about Earth, those points are called the perigee and apogee, respectively. An ellipse has several mathematical forms, but all are a specific case of the more general equation for conic sections.
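The "where will the planet be 400 days after perihelion if P = 500 days" exercise above reduces to solving Kepler's equation M = E - e·sin E for the eccentric anomaly and converting to the true anomaly. A sketch using Newton's method; the eccentricity e = 0.5 is an assumed illustrative value, since the exercise leaves it to the drawn ellipse:

```python
import math

def true_anomaly(t_minus_T_days, period_days, ecc, tol=1e-12):
    """Mean anomaly -> eccentric anomaly (Newton's method) -> true anomaly (rad)."""
    M = 2.0 * math.pi * (t_minus_T_days / period_days)   # mean anomaly
    E = M                                                # initial guess
    for _ in range(50):
        dE = (E - ecc * math.sin(E) - M) / (1.0 - ecc * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    # Convert eccentric anomaly to true anomaly, normalized into [0, 2*pi).
    nu = 2.0 * math.atan2(math.sqrt(1.0 + ecc) * math.sin(E / 2.0),
                          math.sqrt(1.0 - ecc) * math.cos(E / 2.0))
    return M, E, nu % (2.0 * math.pi)

M, E, nu = true_anomaly(400.0, 500.0, 0.5)
# 400/500 of a period puts the planet past aphelion, so pi < nu < 2*pi.
```

Marking the angle nu from the focus, measured from perihelion, locates the planet on the drawn orbit.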
There are four different conic sections, all given by the equation α/r = 1 + e cos θ. Basically, the orbits still look like ellipses to a very good degree, but the ellipses rotate very, very slowly (so they fail to exactly close in on themselves). This effect, known as orbital precession, is most dramatic for Mercury, where the ellipse's axes rotate by more than one degree per century. If your planet (which amazingly has exactly the same parameters as Earth) has no atmosphere and you want to change to an elliptical orbit with a periapsis 400 km lower so it is tangent to the Earth's surface, then when you do your delta-v maneuver your apoapsis will still be at 400 km altitude but the periapsis is at zero altitude, i.e. a radius of 6378 km. Problem 15, Medium Difficulty: A minimum-energy transfer orbit to an outer planet consists of putting a spacecraft on an elliptical trajectory with the departure planet corresponding to the perihelion of the ellipse, or the closest point to the Sun, and the arrival planet at the aphelion, or the farthest point from the Sun. (a) Use Kepler's third law to calculate how long it would take to go. PHYSICS 50. Circular/Elliptical Orbit. For the two-body problem, all the orbital parameters a, e, i, Ω, and ω are constants. A sixth constant T, the time of perihelion passage (i.e., any date at which the object in orbit was known to be at perihelion), may be used to replace f, u, or l, and the position of the planet in its fixed elliptic orbit can be determined uniquely at subsequent times. This is a key relationship for a larger problem in orbital mechanics known as the virial theorem. Determine the minimum energy required to place a large (five metric ton) telecommunications satellite in a geostationary orbit.
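The geostationary-satellite exercise above can be estimated by comparing the total orbital energy -μm/(2 r_geo) with the energy -μm/R of a satellite at rest on the surface. A rough sketch with assumed standard constants, neglecting the kinetic energy of Earth's rotation:

```python
MU = 3.986e14       # m^3/s^2, Earth's gravitational parameter (assumed)
R_EARTH = 6.371e6   # m, mean Earth radius (assumed)
R_GEO = 4.2164e7    # m, geostationary orbit radius (assumed)
m_sat = 5000.0      # kg, "five metric ton" satellite

# Minimum energy = E_final - E_initial, ignoring Earth's rotation:
#   E_final = -MU*m/(2*R_GEO),  E_initial = -MU*m/R_EARTH
delta_E = MU * m_sat * (1.0 / R_EARTH - 1.0 / (2.0 * R_GEO))
# delta_E comes out near 2.9e11 J.
```

Including the surface rotational speed (~465 m/s at the equator) would reduce the requirement only slightly, since that kinetic energy is small compared with the terms above.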
A satellite of mass m is in orbit about the Earth, which has mass M and radius R. Gravitation: Orbits: Problems on Orbits 1 (SparkNotes). A special HEO orbit is the Molniya orbit, named after a series of Soviet communication satellites which used them. The L2 orbit is an elliptical orbit about the semistable second Lagrange point. It is one of the five solutions found by the mathematician Joseph-Louis Lagrange in the 18th century to the three-body problem. Problem #1: The Kuiper Belt is a collection of comet-sized junk which orbits the Sun with nearly circular orbits of typical radius 35 A.U. It is surmised that this reservoir of material is the source of the so-called short-period comets. orbit which desires to transfer via a Hohmann orbit about the sun to an orbit about another planet, such as done in a mission to Mars. In this case, the problem is no longer a two-body problem. Nevertheless, it is common (at least to get a good approximation) to decompose the problem into a series of two-body problems. Newton's Mathematical Proof of Elliptical Orbits: "If I have seen farther, it is by standing on the shoulders of giants" - Isaac Newton (in a letter to Robert Hooke, 1676). Introduction: the historical roots of modern astronomy can be traced back to the first, ancien. Satellite in Elliptical orbit - Physics Stack Exchange. Take the limit Q→P of the sequence of force measures to find the exact value of the force measure at P. (Measured values of r and F, interleaved with the prose in the original layout: 0.419/8.60, 0.460/6.00, 0.560/4.00, 0.607/3.66, 0.625/3.42, 0.644/3.46, 0.647/2.80.) Orbital Mechanics Laboratory: In our Orb Lab, students solve the Kepler Problem in three steps: (1) construct an elliptical orbit, (2) measure the force F at several points r on the orbit, (3) analyze the.
As usual I started my day by reading some physics book (for me, now it doesn't matter which physics book I read; I've got a habit of reading some physics book every day), then I came across this line: "The energy of an elliptic orbit depends only on the length of the major axis of the ellipse." Kepler's third law relates the period to the semimajor axis: \[ T^2 = \dfrac{4\pi^2 a^3}{GM} \] The simplest kind of orbit is a circle, where the planet is trying to travel in a straight line which is carrying it further away from the star it's orbiting around. But the gravitational pull of the star in a particular direction is pulling it back, so it's staying at a constant distance from the star as it goes all the way around that central star. The orbit stays in a plane, and the plane stays fixed in space forever unless some other force acts on the system. Only the ellipse satisfies the requirements of Newton's Laws of Motion and Gravity. For a circular orbit now, all the force is toward the center and the problem is simpler: \( GmM/r^2 = m a_c \), i.e. \( GmM/r^2 = m v_c^2 / r \). The Law of Orbits: all planets move in elliptical orbits, with the sun at one focus. This is one of Kepler's laws. The elliptical shape of the orbit is a result of the inverse-square force of gravity. The eccentricity of the ellipse is greatly exaggerated here. I think that should suffice for your problem. The exact proof of the elliptic orbit for a 1/r^2 force is involved; you can find it in ch. 8, Classical Mechanics by Taylor. We show that the orbit equation in polar coordinates is an equation for conics, and thus a closed orbit is either an ellipse or a circle. Since orbits are time reversible, it takes the same burn to go from a 400 km circular orbit to an elliptical 100x400 orbit. You can see it takes about 0.1 km/s to de-orbit from a 400 km circular orbit. I use the vis-viva equation for much of this spreadsheet. Orbits and Conservation of Energy: determine whether the equations for speed, energy, or period are valid for the problem at hand.
If not, start with the first principles we used to derive those equations. Physics: 1. In classical mechanics, using Newton's laws, the ellipticity of orbits is derived. It is also said that the center of mass is at one of the foci. 2. Each body will orbit the center of mass of the system. My question is: are the assumptions in 1 and 2 correct? Follow-up question: significance of the second focus in elliptical orbits. transfer orbit (this will be one half of an ellipse) and verify your transfer time from this plot. 10.7 Orbital precession for a non-inverse-square force law using Newton: use Newton to study central force motion with a non-inverse-square force law. A good starting point is the geosynchronous circular orbit studied in problem 10.5. Energy and the Elliptical Orbit, Nettles, Bill, 2009-03-01: In the January 2007 issue of The Physics Teacher, Prentis, Fulton, Hesse, and Mazzino describe a laboratory exercise in which students use a geometrical analysis inspired by Newton to show that an elliptical orbit and an inverse-square law force go hand in hand. For a circular orbit while for an ellipse . You can adjust the apoapses, the vectors from the barycenter to the farthest point of each orbit, also known as Laplace-Runge-Lenz vectors. (Apogee, aphelion, and apoastron are more familiar synonyms for apoapsis pertaining to specific celestial bodies.) Elliptical Orbits and the Conservation of Angular Momentum. Isaac Physics: Elliptical orbit. If we are given the distance of closest approach (perihelion), I would agree that we can solve the problem via conservation of energy only, without the need of conservation of angular momentum. Physics duo discover 13 new solutions to Newtonian three-body orbit problem: by Newton's laws of gravity, the orbits are elliptical. Three-Body Planar Periodic Orbits, Phys. Rev. Lett.
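The "about 0.1 km/s to de-orbit from a 400 km circular orbit" figure quoted earlier can be checked with the vis-viva equation, v^2 = μ(2/r - 1/a). A sketch with assumed standard Earth constants, lowering the perigee from 400 km altitude to the surface:

```python
import math

MU = 3.986e14   # m^3/s^2, Earth's gravitational parameter (assumed)
R_E = 6.378e6   # m, Earth's equatorial radius (assumed)

def vis_viva(r, a, mu=MU):
    """Orbital speed (m/s) at radius r on an orbit with semimajor axis a."""
    return math.sqrt(mu * (2.0 / r - 1.0 / a))

r_circ = R_E + 400e3        # radius of the 400 km circular orbit
a_deorbit = R_E + 200e3     # ellipse with apogee at 400 km and perigee at 0 km
v_circ = vis_viva(r_circ, r_circ)
v_apo = vis_viva(r_circ, a_deorbit)  # speed at apogee of the de-orbit ellipse
dv = v_circ - v_apo          # retrograde burn, a bit over 0.1 km/s
```

Because orbits are time reversible, as the quoted post notes, the same dv applied prograde at the ellipse's apogee re-circularizes the orbit at 400 km.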
PHY 688: Numerical Methods for (Astro)Physics. Ex: Highly Elliptical Orbit. Consider a highly elliptical orbit: a = 1.0, e = 0.95 (a Sun-grazing comet). Just to get a reasonable-looking solution, we needed to use τ = 0.0005; this takes 2001 steps (code: orbit-rk4-noadapt.p). The orbits of comets are very different to those of planets: the orbits are highly elliptical (very stretched) or hyperbolic. This causes the speed of the comets to change significantly as their distance from the Sun changes. Not all comets orbit in the same plane as the planets, and some don't even orbit in the same direction. The total energy must be negative for 'a' to be positive; an elliptical orbit is a 'bound energy state'. We can then think of the total angular momentum as being a reflection of the shape of the orbit through the eccentricity in equation . There are two conserved quantities in the physics, and two parameters are needed to describe an ellipse. When dealing with satellites orbiting a central body on a highly elliptical orbit, it is necessary to consider the effect of gravitational perturbations due to external bodies. Indeed, these perturbations can become very important as soon as the altitude of the satellite becomes high, which is the case around the apocentre of this type of orbit. Problem #2: In order for a Kuiper belt object to become a short-period comet that enters the inner solar system, the eccentricity of its orbit has to change from about 0 to pretty close to 1. Suppose this magically happens, and the orbit of a Kuiper belt object changes from a nearly circular orbit with radius 35 A.U. to a highly elliptical orbit with eccentricity of nearly 1. SatCom #4: Satellite Orbit Altitude. SatCom #2: Gain of a circular 'dish' antenna. SatCom #6: Inclination of a Sun-Synchronous Orbit. SatCom #3: Free Space Path Loss. elliptical orbit of Mars, given to the world in the Astronomia nova, published in 1609.
In order to understand the problem that Kepler set himself and to form an appreciation of his achievement in solving it, we need to know something of the physics, astronomy and mathematics he had at his disposal. 1. Physics. Teacher Support: [BL] Relate orbit to year and rotation to day. Be sure that students know that an object rotates on its axis and revolves around a parent body as it follows its orbit. [OL] See how many levels of orbital motion the students know and fill in the ones they don't. For example, moons orbit around planets; planets around stars; stars around the center of the galaxy, etc. Similarly, the phys.org article "Scientists discover more than 600 new periodic orbits of the famous three-body problem" describes the discovery of other symmetrical orbits: these 695 periodic orbits include the well-known figure-eight family found by Moore in 1993, the 11 families found by Suvakov and Dmitrasinovic in 2013, and more than 600 new families reported for the first time. Principles of Physics: A Calculus-Based Text: A comet moves in an elliptical orbit around the Sun. Which point in its orbit (perihelion or aphelion) represents the highest value of (a) the speed of the comet, (b) the potential energy of the comet-Sun system, (c) the kinetic energy of the comet, and (d) the total energy of the comet-Sun system? The kinetic energies of a planet in an elliptical orbit about the Sun, at positions A, B and C, are KA, KB and KC, respectively. AC is the major axis and SB is perpendicular to AC at the position of the Sun S, as shown in the figure. Then (NEET 2018). Anyone who's ever taken a physics course has learned the same myth for centuries now: objects thrown on Earth trace out an elliptical orbit similar to the Moon; the problem resets once again. Free AP Physics C: Mechanics practice problem - Understanding Orbits.
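The PHY 688 example quoted earlier (a = 1.0, e = 0.95, fixed step τ = 0.0005) can be reproduced with a plain fourth-order Runge-Kutta integrator. A self-contained sketch in solar units (GM = 4π², lengths in AU, times in years), not the course's own code; here the integration starts at aphelion and covers only a quarter period, where the fixed step is comfortably accurate:

```python
import math

GM = 4.0 * math.pi ** 2     # AU^3 / yr^2, solar gravitational parameter
a, ecc = 1.0, 0.95          # orbital elements from the quoted example

def rhs(state):
    """Time derivative of (x, y, vx, vy) for planar two-body gravity."""
    x, y, vx, vy = state
    r3 = (x * x + y * y) ** 1.5
    return (vx, vy, -GM * x / r3, -GM * y / r3)

def rk4_step(state, dt):
    def shift(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = rhs(state)
    k2 = rhs(shift(state, k1, dt / 2.0))
    k3 = rhs(shift(state, k2, dt / 2.0))
    k4 = rhs(shift(state, k3, dt))
    return tuple(s + dt / 6.0 * (c1 + 2.0 * c2 + 2.0 * c3 + c4)
                 for s, c1, c2, c3, c4 in zip(state, k1, k2, k3, k4))

def energy(state):
    """Specific orbital energy, conserved for the exact dynamics."""
    x, y, vx, vy = state
    return 0.5 * (vx * vx + vy * vy) - GM / math.hypot(x, y)

r_apo = a * (1.0 + ecc)                       # start at aphelion, the slowest point
v_apo = math.sqrt(GM * (2.0 / r_apo - 1.0 / a))
state = (r_apo, 0.0, 0.0, v_apo)
E0 = energy(state)
dt = 0.0005
for _ in range(500):                          # one quarter period, away from perihelion
    state = rk4_step(state, dt)
drift = abs((energy(state) - E0) / E0)        # relative energy error stays tiny here
```

Near perihelion (r_min = 0.05 AU) the fixed step becomes marginal, which is why the quoted notes call τ = 0.0005 merely enough for a "reasonable-looking" full orbit; an adaptive step handles that passage far more efficiently.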
Question #1492, Physics - Formulas - Kepler and Newton - Orbits: In 1609, Johannes Kepler (assistant to Tycho Brahe) published his three laws of orbital motion: the orbit of a planet about the Sun is an ellipse with the Sun at one focus. Elliptic orbit - Wikipedia: For elliptical orbits, the point of closest approach of a planet to the Sun is called the perihelion. It is labeled point A in the figure. The farthest point is the aphelion and is labeled point B in the figure. For the Moon's orbit about Earth, those points are called the perigee and apogee, respectively. Celestial mechanics - Perturbations and problems of two bodies: The constraints placed on the force for Kepler's laws to be derivable from Newton's laws were that the force must be directed toward a central fixed point and that the force must decrease as the inverse square of the distance. In actuality, however, the Sun, which serves as the source of the major force. newtonian mechanics - Are elliptical orbits really: Our observations were made with the naked eye and a simple cross staff. If we assume an elliptical orbit of Mars and simplify matters by adopting a circular orbit for the Earth, the data of the 2016 opposition indicate that the eccentricity of Mars' orbit is 0.093 ± 0.012. All elliptical orbits with the same semi-major axis have the same period. Kepler orbits. Problem: For a satellite orbiting a planet, transfer between coplanar circular orbits can be effected by an elliptic orbit with perigee and apogee distances equal to the radii of the respective circles, as shown in the figure below. A comet orbits the sun (mass m_S) in an elliptical orbit of semi-major axis a and eccentricity e. (a) Find expressions for the speeds of the comet at perihelion and aphelion.
(b) Evaluate these expressions for Comet Halley (see Example 13.9), and find the kinetic energy, gravitational potential energy, and total mechanical energy for this comet at perihelion and aphelion. The elliptical orbits of planets were indicated by calculations of the orbit of Mars. From this, Kepler inferred that other bodies in the Solar System, including those farther away from the Sun, also have elliptical orbits. The second law helps to establish that when a planet is closer to the Sun, it travels faster. orbit: The path followed by a celestial object or an artificial satellite or space probe that is moving in a gravitational field. For a single object moving freely in the gravitational field of a massive body the orbit is a conic section, in actuality either elliptical or hyperbolic. Closed (repeated) orbits are elliptical, most planetary orbits being almost circular. Elliptical orbit equation: Earth's orbit has an eccentricity of less than 0.02, which means that it is very close to being circular. That is why the difference between the Earth's distance from the Sun at perihelion and.
The problem is solved using oomph-lib'sinline unstructured mesh generation procedures t SOLVED:Eros has an elliptical orbit about the Su PROBLEM 4.8 A satellite is launched into Earth orbit where its launch vehicle burns out at an altitude of 250 km. At burnout the satellite's velocity is 7,900 m/s with the zenith angle equal to 89 degrees. Calculate the satellite's altitude at perigee and apogee. SOLUTION, = 89 o Equation (4.26), (Rp / r1) 1,2 = ( -C ± SQRT[ C 2 - 4 × ( The second part of the problem requires you to remember what an ellipse looks like. Quite frankly, I can't imagine how you would do this problem without drawing a picture: In the above figure, you can see Sedna's elliptical orbit about the Sun, which is located at one of the foci of the ellipse describing Sedna's orbit Craig A. Kluever, in Encyclopedia of Physical Science and Technology (Third Edition), 2003 I.B.1 The Elliptical Orbit. The eccentricity of an elliptical orbit is defined by the ratio e = c/a, where c is the distance from the center of the ellipse to either focus. The range for eccentricity is 0 ≤ e < 1 for an ellipse; the circle is a special case with e = 0 University Physics Volume 1 (0th Edition) Edit edition Solutions for Chapter 13 Problem 53P: Eros has an elliptical orbit about the Sun, with a perihelion distance of 1.13 AU and aphelion distance of 1.78 AU The center of the elliptical orbit is actually inside the Earth, and the ellipse, having an eccentricity of e = 188 / 4420, or about 0.04, is pretty close to being a circle.) The vertex closer to the end of the ellipse containing the Earth's center will be at 4420 units from the ellipse's center, or 4420 - 188 = 4232 units from the center of the Earth The ellipse may be seen to be a conic section, a curve obtained by slicing a circular cone. A slice perpendicular to the axis gives the special case of a circle. 
For the description of an elliptic orbit , it is convenient to express the orbital position in polar coordinates, using the angle θ Abstract: When dealing with satellites orbiting a central body on a highly elliptical orbit, it is necessary to consider the effect of gravitational perturbations due to external bodies. Indeed, these perturbations can become very important as soon as the altitude of the satellite becomes high, which is the case around the apocentre of this type of orbit Python hash file. Should I buy Asia ETF. What are THE relevant Cryptography technologies. Sälja el Vattenfall. SafeTeam Globen. Europska unia vznik. IOST wallet Ledger. Anheuser busch brands. DoopieCash vermogen. AI trader Reddit. Kyla ner solceller. Codetantra online exam Quora. Money puzzle box solution. EMQQ ETF. Skicka retur till Kina. Passoã drink enkel. Avtalspension kommunal. BITCI CoinGecko. Silver Dollar coin 2020. Civilekonom utbildning Lund. Kepler Cheuvreux Stockholm. Maskinentreprenörerna. Filtrera datum Excel. Börsen i år 2020. Jd.com prognose. Explain xkcd never seen star wars. Steuerprogramm für Kurzarbeiter kostenlos. Intrauterin fosterdöd. Lediga jobb djurskyddshandläggare. Baseball Pitching videos. Handmixer Dualit test. BAY BTC. Istikbal erbjudande. Tron price prediction. Handelsbanken Multi Asset 60 Morningstar. Caesar cipher Khan Academy. Varför bitcoin. Court crypto. Clas Ohlson TV. Vad är legering kemi. Sphere.social login.
What is 1/11 of 30? What is 1 / 11 of 30 and how to calculate it yourself. 1 / 11 of 30 = 2.73. In this article, we will go through how to calculate 1 / 11 of 30 and how to calculate any fraction of any whole number (integer). This article will show a general formula for solving this equation for positive numbers, but the same rules can be applied for numbers less than zero too! Here's how we will calculate 1 / 11 of 30. 1: The first step in solving 1 / 11 of 30 is understanding your fraction. 1 / 11 has two important parts: the numerator (1) and the denominator (11). The numerator is the number above the division line (called the vinculum) and represents the number of parts being taken from the whole. For example: if there were 14 cars total and 1 painted red, 1 would be the numerator, or the parts of the total. In this case of 1 / 11, 1 is our numerator. The denominator (11) is located below the vinculum and represents the total number. In the example above, 14 would be the denominator of cars. For our fraction, 1 is the numerator and 11 is the denominator. 2: Write out your equation of 1 / 11 times 30. When solving for 1 / 11 of a number, students should write the equation as the whole number (30) times 1 / 11. The solution to our problem will always be smaller than 30 because we are going to end up with a fraction of 30. $$ \frac{ 1 }{ 11 } \times 30 $$ To convert any whole number into a fraction, add a 1 into the denominator. Now place 1 / 11 next to the new fraction. This gives us the equation below. Tip: always write out your fractions 30 / 1 and 1 / 11. It might seem boring or taxing, but dividing fractions can be confusing. Writing out the conversion simplifies our work. $$ \frac{ 1 }{ 11 } \times \frac{ 30 }{1} $$ Once we set up our equations 1 / 11 and 30 / 1, we need to multiply the values, starting with the numerators. In this case, we will be multiplying 1 (the numerator of 1 / 11) and 30 (the numerator of our new fraction 30/1).
If you need a refresher on multiplying fractions, please see our guide here! $$ \frac{ 1 }{ 11 } \times \frac{ 30 }{1} = \frac{ 30 }{ 11 } $$ Then we need to do the same for our denominators. In this equation, we multiply 11 (the denominator of 1 / 11) and 1 (the denominator of our new fraction 30 / 1). Our new denominator is 11. 5: Divide our new fraction (30 / 11). After solving for our new equation of 30 / 11, our last job is to simplify this problem using long division. For longer fractions, we recommend to all of our students to write this last part down and use left-to-right long division. $$ \frac{ 30 }{ 11 } = 2.73 $$ Multiply 30 / 1 by our fraction, 1 / 11. We get 30 / 11 from that. Perform a standard division: 30 divided by 11 = 2.73. Additional way of calculating 1 / 11 of 30: you can also write our fraction, 1 / 11, as a decimal by simply dividing 1 by 11, which is 0.09. If you multiply 0.09 by 30 you will see that you end up with the same answer as above. You may also find it useful to know that if you multiply 0.09 by 100 you get 9.0, which means that our answer of 2.73 is 9.0 percent of 30. What is 1 / 11 of 1? What is 6 / 17 of 22? What is 9 / 15 of 18? What is 10 / 13 of 78? What is 10 / 16 of 16? What is 3 / 11 of 32? What is 3 / 7 of 62? What is 1 / 15 of 4? What is 1 / 5 of 15? What is 1 / 3 of 85? What is 5 / 7 of 15? Variables, Basics of Equations. Associative Property, Commutative, Distributive, Transitive. Series. Mixed Fractions, Improper Fractions. Speed Distance Time. Area of a Circle. Functions Introduction.
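The whole procedure above (multiply by the numerator, divide by the denominator, round) collapses to one line of arithmetic. A small helper, with names chosen here for illustration:

```python
def fraction_of(numerator, denominator, whole, places=2):
    """Return (numerator/denominator) * whole, rounded to `places` decimals."""
    return round(whole * numerator / denominator, places)

result = fraction_of(1, 11, 30)   # 30/11 rounded to two decimals
```

Applied to the other questions listed, e.g. fraction_of(1, 5, 15) gives 3.0.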
Accuracy and data efficiency in deep learning models of protein expression
Evangelos-Marios Nikolados1, Arin Wongprommoon1, Oisin Mac Aodha2,3, Guillaume Cambray4,5 & Diego A. Oyarzún (ORCID: orcid.org/0000-0002-0381-5278)1,2,3
Synthetic biology often involves engineering microbial strains to express high-value proteins. Thanks to progress in rapid DNA synthesis and sequencing, deep learning has emerged as a promising approach to build sequence-to-expression models for strain optimization. But such models need large and costly training data that create steep entry barriers for many laboratories. Here we study the relation between accuracy and data efficiency in an atlas of machine learning models trained on datasets of varied size and sequence diversity.
We show that deep learning can achieve good prediction accuracy with much smaller datasets than previously thought. We demonstrate that controlled sequence diversity leads to substantial gains in data efficiency and employ Explainable AI to show that convolutional neural networks can finely discriminate between input DNA sequences. Our results provide guidelines for designing genotype-phenotype screens that balance cost and quality of training data, thus helping promote the wider adoption of deep learning in the biotechnology sector.

Microbial production systems have found applications in many sectors of the economy1. In a typical microbial engineering pipeline, cellular hosts are transformed with heterologous genes that code for target protein products, and a key requirement is maximization of titers, productivity, and yield. Such optimization requires the design of genetic elements that ensure high transcriptional and translational efficiency2, such as promoter3 or ribosomal binding sequences4. However, prediction of protein expression is notoriously challenging and, as a result, strain development suffers from costly rounds of prototyping and characterization, typically relying on heuristic rules to navigate the sequence space towards increased production. Progress in batch DNA synthesis and high-throughput sequencing has fueled the use of deep mutational scanning to study genotype-phenotype associations. Several works have combined high-throughput mutagenesis with a diverse range of measurable phenotypes, including protein expression5,6,7,8, ribosome loading9, and DNA methylation10,11. As a result, recent years have witnessed a substantial interest in machine learning methods that leverage such data for phenotypic prediction9,12,13,14,15. In synthetic biology, recent works have incorporated machine learning into the design-build-test cycle for predictive modelling of ribosomal binding sequences16, RNA constructs17, promoters18 and other regulatory elements19.
Such sequence-to-expression models can be employed as in silico platforms for discovering variants with improved expression properties, paving the way toward a new level of computer-aided design of production strains18. Deep learning algorithms, in particular, can uncover relations in the data on a scale that would be impossible by inspection alone, owing to their ability to capture complex dependencies with minimal prior assumptions20. Although deep learning models can produce highly accurate phenotypic predictions12,21,22, they come at the cost of enormous data requirements for training, typically ranging from tens to hundreds of thousands of sequences; see recent examples in Supplementary Table S1. Little attention has been paid to deep learning models in synthetic biology applications where data sizes are far below the requirements of state-of-the-art algorithms and, moreover, there is a poor grasp of what makes a good dataset for model training. This is particularly relevant in applications where the cost of strain phenotyping is a limiting factor, as this places a ceiling on the number of variants that can be screened. The challenge is then to design a limited set of variants so that the resulting data can be employed to train useful predictors of protein expression. For example, if the sequence space has a broad and shallow coverage, i.e. composed of distant and isolated variants, the resulting data may be difficult to regress because each sample contains little information that correlates with expression. Conversely, if the coverage of the screen is narrow and deep, i.e. composed of closely related sequence variants, models may be accurate but generalize poorly to other regions of the sequence space. Here, we trained a large number of sequence-to-expression models on datasets of variable size and sequence diversity.
We employed a large screen of superfolder GFP-producing (sfGFP) strains in Escherichia coli23 that was designed to ensure a balanced coverage of the sequence space. We sampled these data so as to construct training datasets of varying size and controlled sequence diversity. We first establish the baseline performance of a range of classic, non-deep, machine learning models trained on small datasets with various phenotype distributions and a range of strategies for encoding DNA sequences. This analysis revealed that for this particular dataset, accurate models can be trained on as few as a couple of thousand variants. We moreover show that convolutional neural networks (CNN), a common deep learning architecture, further improve predictions without the need to acquire additional data. Using tools from Explainable AI24, we show that CNNs can better discriminate between input sequences than their non-deep counterparts and, moreover, the convolutional layers provide a mechanism to extract sequence features that are highly predictive of protein expression. We finally demonstrate that in limited data scenarios, controlled sequence diversity can improve data efficiency and predictive performance across larger regions of the sequence space. We validated this conclusion in a recent dataset of ~3000 promoter sequences in Saccharomyces cerevisiae25. Our results provide a systematic characterization of sequence-to-expression machine learning models, with implications for the wider adoption of deep learning in strain design and optimization. Size and diversity of training data We sought to compare various machine learning models using datasets of different size and diversity. To this end, we employed the genotype-phenotype association data from Cambray et al.23.
This dataset comprises fluorescence measurements for an sfGFP-coding sequence in Escherichia coli, fused with more than 240,000 upstream 96nt regions that were designed to perturb translational efficiency and the resulting expression level. The library of upstream sequences was randomized with a rigorous design-of-experiments approach so as to achieve a balanced coverage of the sequence space and a controlled diversity of variants. Specifically, 96nt sequences were designed from 56 seeds with maximal pairwise Hamming distances. Each seed was subject to controlled randomization using the D-Tailor framework26, so as to produce mutational series with controlled coverage of eight biophysical properties at various levels of granularity: nucleotide sequence, codon sequence, amino acid sequence, and secondary mRNA structure (Fig. 1A). Fig. 1: Characterization of the training data. A We employed a large phenotypic screen in Escherichia coli23 of an sfGFP coding gene preceded by a variable 96nt sequence. The variable region was designed on the basis of eight sequence properties previously described as impacting translational efficiency: nucleotide content (%AT), patterns of codon usage (codon adaptation index, CAI, codon ramp bottleneck position, BtlP, and strength, BtlS), hydrophobicity of the polypeptide (mean hydrophobicity index, MHI) and stability of three secondary structures tiled along the transcript (MFE-1, MFE-2, and MFE-3). A total of 56 seed sequences were designed to provide a broad coverage of the sequence space, and then subjected to controlled randomization to create 56 mutational series of ~4000 sequences each. After removal of variants with missing measurements, the dataset contains 228,000 sequences. Violin plots show the distribution of the average value of the eight properties across the 56 mutational series; the biophysical properties were normalized to the range [0, 1] and then averaged across series. 
For all violins, the white circle indicates the median, box edges are at the 25th and 75th percentiles, and whiskers show the 95% confidence interval. B Two-dimensional UMAP27 visualization of overlapping 4-mers computed for all 228,000 sequences; this representation reveals 56 clusters, with each cluster corresponding to a mutational series that locally explores the sequence space around its seed; we have highlighted five series with markedly distinct phenotype distributions (labels denote the series number). Other UMAP projections for overlapping 3-mers and 5-mers are shown in Supplementary Fig. S1. C Mutational series with qualitatively distinct phenotypic distributions, as measured by FACS-sequencing of sfGFP fluorescence normalized to its maximal measured value; solid lines are Gaussian kernel density estimates of the fluorescence distribution. Measurements are normalized to the maximum sfGFP fluorescence across cells transformed with the same construct averaged over 4 experimental replicates of the whole library23. Fluorescence distributions for all mutational series are shown in Supplementary Fig. S2. The complete dataset contains 56 mutational series that provide wide coverage of the sequence space, while each series contains ~4000 sequences for local exploration in the vicinity of the seed. The dataset is particularly well suited for our study because it provides access to controllable sequence diversity, as opposed to screens that consider fully random sequences with limited coverage or single mutational series that lack diversity. To further characterize sequence diversity across the library of 56 mutational series, we visualized the distribution of overlapping 4-mers using the Uniform Manifold Approximation and Projection (UMAP) algorithm for dimensionality reduction27. The resulting two-dimensional distribution of sequences (Fig. 1B) shows a clear structure of 56 clusters, each corresponding to a mutational series.
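The overlapping k-mer featurization behind this projection can be sketched as follows; `kmer_counts` is an illustrative helper of our own (the study's exact pipeline is not shown), and the resulting count vectors would be what a tool such as umap-learn then reduces to two dimensions.

```python
from collections import Counter

def kmer_counts(seq, k=4):
    """Count overlapping k-mers (sliding window, step 1) in a DNA string."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

# A 96nt variable region yields 96 - 4 + 1 = 93 overlapping 4-mers;
# the toy sequence below is just for illustration.
counts = kmer_counts("ATGCATGCAT", k=4)
```

Expanding each Counter over all 4^4 = 256 possible 4-mers gives a fixed-length feature vector per sequence, a suitable input for dimensionality reduction.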
Moreover, the sfGFP fluorescence data (Fig. 1C, Supplementary Fig. S2) display marked qualitative differences across mutational series, including near-Gaussian distributions, left- and right-skewed distributions, as well as bimodal and uniform distributions. This indicates that the dataset is diverse in both genotype and phenotype space, and thus well suited for benchmarking machine learning models because it allows probing the impact of both genetic and phenotypic variation on model accuracy. Impact of DNA encoding and size of training data To understand the baseline performance of classic (non-deep) machine learning models, we trained various regressors on datasets of varying sizes and with different DNA encoding strategies (Fig. 2A). Sequence encoding is needed to featurize nucleotide strings into numerical vectors that can be processed by downstream machine learning models. We considered DNA encodings on three resolutions (Table 1, Fig. 2A): global biophysical properties (Fig. 1A), DNA subsequences (overlapping k-mers), and single nucleotide resolution (one-hot encoding). Fig. 2: Accuracy of non-deep machine learning models. A We trained models using datasets of variable size and with different strategies for DNA encoding. Sequences were converted to numerical vectors with five DNA encoding strategies (Table 1), plus an additional mixed encoding consisting of binary one-hot augmented with the biophysical properties of Fig. 1A; in all cases, one-hot encoded matrices were flattened as vectors of dimension 384. We considered four non-deep models trained on an increasing number of sequences from five mutational series with different phenotype distributions (Fig. 1B). B Impact of DNA encoding and data size on model accuracy. 
Overall we found that random forest regressors and binary one-hot encodings provide the best accuracy; we validated this optimal choice across the whole sequence space by training more than 5000 models in all mutational series (Supplementary Fig. S5). Phenotype distributions have a minor impact on model accuracy thanks to the use of stratified sampling for training. Model accuracy was quantified by the coefficient of determination (R2) between predicted and measured sfGFP fluorescence, computed on ~400 test sequences held-out from training and validation. The reported R2 values are averages across five training repeats with resampled training and test sets (Monte Carlo cross-validation). In each training repeat, we employed the same test set for all models and encodings. The full cross-validation results (Supplementary Fig. S4) show robust performance and little overfitting, particularly for the best performing models. C Exemplar predictions on held-out sequences for three models from panel B (marked with stars); the shown models were trained on 25% of mutational series 44 (bimodal fluorescence distribution; Fig. 1C) using 4-mer ordinal encoding. Details on model training and hyperparameter optimization can be found in the Methods, Supplementary Fig. S3, and Supplementary Tables S2–S3. Table 1 DNA encodings for model training We trained models on five mutational series chosen because their phenotype distributions are representative of those in the whole dataset (Fig. 1B, Supplementary Fig. S2), and with an increasing number of sequences for training (from ~200 to ~3000 sequences per series). Given the variation in phenotype distributions, we stratified training samples to ensure that their distribution is representative of the full series. 
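The stratification by phenotype can be sketched by binning fluorescence values into quantiles and sampling each bin proportionally; the number of bins and the synthetic stand-in data below are our own assumptions, not details from the study.

```python
import numpy as np

def stratified_sample(fluorescence, n_train, n_bins=10, seed=0):
    """Sample indices so the training set mirrors the full phenotype distribution.

    Sequences are binned by fluorescence quantiles and sampled proportionally
    from each bin (10 bins is an assumed choice for illustration).
    """
    rng = np.random.default_rng(seed)
    edges = np.quantile(fluorescence, np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.searchsorted(edges, fluorescence, side="right") - 1,
                   0, n_bins - 1)
    chosen = []
    for b in range(n_bins):
        members = np.flatnonzero(bins == b)
        take = round(n_train * len(members) / len(fluorescence))
        chosen.extend(rng.choice(members, size=min(take, len(members)),
                                 replace=False))
    return np.array(chosen)

y = np.random.default_rng(1).normal(size=4000)  # stand-in for one mutational series
train_idx = stratified_sample(y, n_train=1000)
```

Because each bin contributes in proportion to its share of the series, skewed or bimodal fluorescence distributions are preserved in the training subset.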
We considered four non-deep models: ridge regressor28 (a type of penalized linear model), multilayer perceptrons29 (MLP, a shallow neural network with three hidden layers), support vector regressor30 (SVR, based on linear separation of the feature space with a radial basis function kernel), and random forest regressor31 (RF, based on axis-aligned splits of the feature space). We chose this array of models because they markedly differ in their principle of operation and underlying assumptions on the shape of the feature space. We tuned model hyperparameters using grid search and 10-fold cross-validation on datasets assembled from aggregated fractions of all mutational series; this allowed us to determine a fixed set of hyperparameters for each of the four models with good performance across the whole dataset (see Methods and Supplementary Fig. S3). In all cases, we assessed predictive accuracy using the coefficient of determination, R2 defined in Eq. (1), between measured and predicted sfGFP fluorescence computed on a set of ~400 test sequences (Supplementary Fig. S3) that were held-out from model training and validation. In line with expectation, the results in Fig. 2B show that models trained on small datasets are generally poor irrespective of the encoding or regression method. Linear models (ridge) display exceptionally poor accuracy and are insensitive to the size of the training set. In contrast, a shallow neural network (multilayer perceptron) achieved substantial gains in accuracy with larger training sets, possibly owing to its ability to capture nonlinear relationships. Our results show that mildly accurate models (R2 ≥ 50%) can be obtained from training sets with ~1000 sequences using random forests and support vector regressors (Fig. 2C).
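The accuracy metric referenced as Eq. (1) is the standard coefficient of determination; a minimal implementation, assuming that standard definition, is:

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot.

    Can be negative when the model fits worse than a baseline that always
    predicts the mean of the observations.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

perfect = r2_score([1, 2, 3], [1, 2, 3])    # exact predictions -> 1.0
mean_only = r2_score([1, 2, 3], [2, 2, 2])  # mean-only baseline -> 0.0
```

This is the same quantity whether reported as a fraction (R2 = 0.5) or a percentage (R2 = 50%).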
We found random forest regressors to be the most accurate among the considered models, consistently achieving R2 ≥ 50% for datasets with more than 1000 samples and showing a stable performance when trained on other mutational series (Supplementary Fig. S5). To produce robust performance metrics, the R2 scores in Fig. 2B are averages across five training repeats with resampled training and test sets (Monte Carlo cross-validation). We also observed a sizeable impact of DNA encodings on prediction accuracy. Subsequence-resolution encodings achieve varying accuracy that is highly dependent on the specific mutational series and chosen model (Fig. 2B, Supplementary Fig. S5). Overall we found a strong preference for base-resolution encodings, with binary one-hot representations achieving the best accuracy. A salient result is that the sequence biophysical properties led to poorer accuracy than most other encodings, possibly due to their inability to describe a high-dimensional sequence space with a relatively small number of features (8). Their poor performance is particularly surprising because the biophysical properties were used to design the sequences based on their presumed phenotypic impact23; moreover, some of them (codon adaptation index, mRNA secondary structures) represent the state-of-the-art understanding of a sequence's impact on translation efficiency23,32,33, while one-hot encodings lack such mechanistic information. In an attempt to combine the best of both approaches, we trained models on binary one-hot sequences augmented with the biophysical properties ("mixed" encoding in Table 1, Fig. 2B and Supplementary Fig. S5). This strategy led to slight gains in accuracy for small training sets; e.g. for ~200 training sequences, the median R2 with mixed encoding is 0.30 vs a median of 0.26 for binary one-hot (Supplementary Fig. S5). For larger training sets, however, binary one-hot encodings gave the best and most robust accuracy across models.
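The mixed encoding amounts to concatenating the flattened binary one-hot matrix (96 positions x 4 channels = 384 features) with the 8 biophysical properties; the sketch below uses placeholder property values and an assumed A/C/G/T channel order.

```python
import numpy as np

def mixed_encoding(seq, biophys):
    """Flattened one-hot sequence concatenated with 8 biophysical properties.

    `biophys` stands in for the normalized properties of Fig. 1A (%AT, CAI,
    BtlP, BtlS, MHI, MFE-1/2/3); the values here are placeholders.
    """
    idx = {"A": 0, "C": 1, "G": 2, "T": 3}
    onehot = np.zeros((len(seq), 4))
    for pos, base in enumerate(seq):
        onehot[pos, idx[base]] = 1.0
    return np.concatenate([onehot.flatten(), np.asarray(biophys, dtype=float)])

vec = mixed_encoding("ACGT" * 24, biophys=np.linspace(0, 1, 8))
# 384 one-hot features + 8 properties = 392-dimensional input
```

Dropping the last 8 entries recovers the plain binary one-hot encoding used by the best-performing models.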
Deep learning improves accuracy with the same amount of data Prior work has shown that deep learning can produce much more accurate predictions than non-deep models16,19. Deep learning models, however, typically require extremely large datasets for training; some of the most powerful deep learning phenotypic predictors, such as DeepBIND12, Optimus 5-prime9, ExpressionGAN34, and Enformer14 were trained with tens to hundreds of thousands of variants. In the case of sequence-to-expression models, recent literature shows a trend towards more complex and data-intensive models (see Supplementary Table S1); the most recent sequence-to-expression model employed ~20,000,000 promoter sequences to predict protein expression in Saccharomyces cerevisiae25. But it is unclear if the accuracy of such predictors results from the model architecture or simply from the sheer size of the training data. To test this idea with our data, we designed a convolutional neural network (CNN, a common type of deep learning model) with an off-the-shelf architecture of similar complexity to those employed in recent literature25. Our CNN architecture (Fig. 3A) processes a binary one-hot encoded sequence through three convolutional layers, followed by four dense layers which are equivalent to a four-layer multilayer perceptron. The convolutional layers can be regarded as weight matrices acting on an input sequence. By stacking several convolutional layers, the network can capture interactions between different components of the input. We designed the CNN architecture with a Bayesian optimization algorithm35 to determine the optimal number of network layers, as well as the optimal settings for the filters in each layer (see Supplementary Tables S4–S5 for details). In addition to the components shown in Fig. 3A, we also included a dropout layer to prevent overfitting and max pooling to reduce the number of trainable parameters. As with the non-deep models in Fig.
2B, hyperparameter optimization was performed by splitting the data into separate sets for training and cross-validation (details in Methods and Supplementary Fig. S3). This allowed us to find a single CNN architecture with good performance across the individual 56 mutational series and the whole dataset. Fig. 3: Prediction accuracy of deep neural networks. A Architecture of the convolutional neural network (CNN) employed in this paper; the output is the predicted sfGFP fluorescence in relative units. The CNN architecture was designed with Bayesian optimization35 to find a single architecture for all mutational series; our strategy for hyperparameter optimization can be found in the Methods, Supplementary Fig. S3, and Supplementary Tables S4–S5. B Accuracy of the CNN in panel A trained on all mutational series. R2 values were computed on held-out sequences (10% of total) and averaged across 5 training repeats; bars denote the mean R2. C Prediction accuracy of CNNs against random forest (RF) and multilayer perceptrons (MLPs) on all 56 mutational series using binary one-hot encoding. The CNNs yield more accurate predictions with the same training data. Violin plots show the distribution of 56 R2 values for each model averaged across 5 training repeats; R2 values were computed on held-out sequences (10% of sequences per series). For all violins, the white circle indicates the median, box edges are at the 25th and 75th percentiles, and whiskers show the 95% confidence interval. Inset shows predictions of a CNN trained on 75% of the mutational series with a right-skewed phenotypic distribution (Fig. 1B) computed on held-out test sequences. The CNNs are more complex than the shallow MLPs (2,702,337 vs 58,801 trainable parameters, respectively), but we also found that the CNNs outperform MLPs of comparable complexity (Supplementary Fig. S8); this suggests that improved performance is a result of the convolutional layers acting as a feature extraction mechanism.
Details on CNN training can be found in the Methods and Supplementary Fig. S7. D Average R2 scores for each model across all 56 mutational series using 75% of sequences for training. When trained on up to 75% of the full dataset (~160,000 sequences), our CNN model produced excellent predictions in test sets covering broad regions of the sequence space (average R2 = 0.82 across five cross-validation runs, Fig. 3B and Supplementary Fig. S6). This suggests data size alone may be sufficient for training accurate regressors. However, since data of such scale are rarely available in synthetic biology applications, we sought to determine the capacity of CNNs to produce accurate predictions from much smaller datasets than previously considered. To this end, we trained CNNs with the same architecture in Fig. 3A on each mutational series, using ~1000–3000 sequences in each case; details on CNN training can be found in the Methods, Supplementary Fig. S7, and Supplementary Tables S4–S5. We benchmarked the accuracy of the CNNs against non-deep models trained on the same 56 mutational series. As benchmarks we chose two non-deep models: a shallow perceptron (MLP) because it is also a type of neural network, and a random forest regressor because it showed the best performance so far (Fig. 2B). We found that CNNs are consistently more accurate than non-deep models, regardless of the size of the training data (Fig. 3C–D) and across most of the 56 mutational series. In fact, in more than half of mutational series, the CNNs achieve accuracy over 60% with ~1000 training sequences, and in some cases they reach near state-of-the-art accuracy (R2 = 0.87 averaged across five cross-validation runs, Fig. 3C inset). When trained on ~3000 sequences, the CNNs outperformed the MLP in all mutational series, and the random forest regressor in all but four series (Fig. 3D). 
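The view of convolutional layers as weight matrices sliding along a one-hot sequence, described above for the CNN architecture, can be made concrete with a minimal numpy forward pass; the filter count and width below are arbitrary stand-ins, not the hyperparameters found by the Bayesian optimization.

```python
import numpy as np

def conv1d_relu(onehot, filters):
    """'Valid' 1D convolution over a (L, 4) one-hot sequence.

    Each of the n filters is a (width, 4) weight matrix applied at every
    position, giving a (L - width + 1, n) feature map; ReLU follows.
    """
    n, width, _ = filters.shape
    L = onehot.shape[0]
    out = np.empty((L - width + 1, n))
    for i in range(L - width + 1):
        window = onehot[i:i + width]  # local patch of the input sequence
        out[i] = np.tensordot(filters, window, axes=([1, 2], [0, 1]))
    return np.maximum(out, 0.0)       # ReLU nonlinearity

rng = np.random.default_rng(0)
onehot = np.eye(4)[rng.integers(0, 4, size=96)]  # random 96nt one-hot input
filters = rng.normal(size=(16, 8, 4))            # 16 filters of width 8 (assumed)
fmap = conv1d_relu(onehot, filters)
```

Stacking such layers lets later filters act on the feature maps of earlier ones, which is how interactions between distant components of the input can be captured.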
To understand why CNNs provide such improved accuracy without larger training data, we performed extensive comparisons against deep MLPs of similar complexity that lack the convolutional layers. We note that the CNNs in Fig. 3C have ~45-fold more trainable parameters than the MLPs, which suggests that such additional complexity may be responsible for the improved predictive accuracy. We thus sought to determine if increasing MLP depth could bring their performance to a level comparable to the CNNs. We trained MLPs with an increasing number of hidden layers on ~3000 sequences from each mutational series. We found that the additional layers provide marginal improvements in accuracy, and that the performance gap between CNNs and MLPs exists even when both have a similar number of trainable parameters (Supplementary Fig. S8). This suggests that the higher accuracy of the convolutional network stems from its inbuilt inductive bias that enables it to capture local structure via the learned filters and more global structure through successive convolutions36. As a result, it can capture interactions between different components of the input and produce sequence embeddings that are highly predictive of protein expression. To further determine how both neural networks process the input sequences, we employed methods from Explainable AI to quantify their sensitivity upon changes in input sequences. We utilized DeepLIFT37, a computationally efficient method that produces importance scores for each feature of the input; such scores are known as "attribution scores" in the Explainable AI literature24. When applied to one-hot encoded sequences, DeepLIFT produces scores at the resolution of single nucleotides (Fig. 4A). We employed these scores to compute pairwise distances between sequences processed by the same model. The shorter the distance, the more similar the two sequences appear to the model.
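Such a pairwise distance computation can be sketched with cosine distances between flattened attribution vectors; since real DeepLIFT scores require a trained model, random score vectors stand in below.

```python
import numpy as np

def cosine_distance_matrix(scores):
    """Pairwise cosine distances between attribution-score vectors.

    scores: (n_sequences, n_features). A smaller distance means the model
    responds to the two sequences more similarly.
    """
    norm = scores / np.linalg.norm(scores, axis=1, keepdims=True)
    return 1.0 - norm @ norm.T  # cosine distance = 1 - cosine similarity

rng = np.random.default_rng(0)
scores = rng.normal(size=(30, 384))  # stand-in for per-nucleotide DeepLIFT scores
D = cosine_distance_matrix(scores)
```

The resulting symmetric matrix is what would then be passed to hierarchical or k-means clustering to compare how heavily each model groups its inputs.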
We computed such distances for all pairs of sequences in each test set processed by the MLP or CNN. The matrices of pairwise distances (Fig. 4B) were then subjected to hierarchical clustering as a means to contrast the diversity of responses elicited by test sequences on the two models. Using k-means clustering, we showed that the CNN produces less clustered attribution distances than the MLP (Fig. 4C), thus highlighting the ability of the convolutional layers to discriminate input sequences with finer granularity. This trend was found in all but four of the CNNs (Supplementary Fig. S10). Fig. 4: Sensitivity to input sequences using Explainable AI. A DeepLIFT37 attribution scores per nucleotide position for a given test sequence and trained model. Panels show scores of 30 sequences chosen at random from the same test set employed in Fig. 3C for models trained on 75% of mutational series 21. B Attribution distances for models trained on series 21. We computed the cosine distance between DeepLIFT scores for each sequence in the test set. Distance heatmaps were hierarchically clustered to highlight the cluster structure that both models assign to the input sequences. C K-means clustering of the distance matrices in panel B. Line plots show the optimal k-means score averaged across 20 runs with random initial cluster assignments. Lower scores for all values of k suggest that the MLP clusters sequences more heavily than the CNN; we found this pattern in all but four mutational series (Supplementary Fig. S10). Impact of sequence diversity on model coverage A well-recognized caveat of sequence-to-expression models is their limited ability to produce accurate predictions in regions of the sequence space not covered by the training data25,38; this is commonly referred to as generalization performance in the machine learning jargon. In line with expectation, we found that the CNNs from Fig.
3C, which were trained on a single mutational series each, performed poorly when tested on other mutational series (R2 ≤ 0 for most models, Supplementary Fig. S11A); we observed similarly poor results for the non-deep models in Fig. 2 (Supplementary Fig. S11B). Negative R2 scores indicate an inadequate model structure with a poorer fit than a baseline model that predicts the average observed fluorescence for all variants. This means that models trained on a particular region of the sequence space are too specialized, and their phenotypic predictions do not generalize to distant sequences. Although poor generalization can be caused by model overfitting, our cross-validation results (see Supplementary Fig. S6A and Supplementary Fig. S7) rule out this option and suggest that it is rather a consequence of the large genotypic differences between mutational series, compounded with the high-dimensionality of the sequence space. Recent work by Vaishnav and colleagues demonstrated that model generalization can be improved with CNNs of similar complexity to ours25 trained on large data (~20,000,000 variants). Since the cost of such large screens is prohibitive in most synthetic biology applications, we sought to understand how model coverage could be improved in scenarios where data size is strongly limited. The idea is to design a sequence space for training that can enlarge the high-confidence regions of the predictors with a modest number of variants. To this end, we performed a computational experiment designed to test the impact of sequence diversity on the ability of CNNs to produce accurate predictions across different mutational series. We trained CNNs on datasets of constant size but increasing sequence diversity (Fig. 5A, Supplementary Fig. S12). We considered an initial model trained on 5800 sequences sampled from the aggregate of two series chosen at random, e.g. 2900 sequences from series 13 and 23, respectively (Fig. 5A top row).
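The constant-size, increasing-diversity construction can be sketched as follows; `diversity_schedule` is an illustrative helper of our own, with 54 trainable series reflecting that two of the 56 were held out of training (as noted in the Fig. 5 caption).

```python
import random

def diversity_schedule(series_ids, total=5800, step=2, seed=0):
    """Allocate a fixed budget of training sequences over 2, 4, 6, ... series.

    Each round adds `step` randomly ordered series and splits the same total
    evenly among them, so per-series sampling gets progressively sparser.
    """
    pool = list(series_ids)
    random.Random(seed).shuffle(pool)
    schedule = []
    for n in range(step, len(pool) + 1, step):
        schedule.append({s: total // n for s in pool[:n]})
    return schedule

# 54 trainable series (two of the 56 held out) -> 27 models,
# from 5800 // 2 = 2900 down to 5800 // 54 = 107 sequences per series.
rounds = diversity_schedule(range(54))
```

Each dictionary in `rounds` maps a series to its per-series sample count for one model; the training sequences themselves would then be drawn from those series.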
We successively added two series to the aggregate and retrained a CNN while keeping a constant number of total sequences. This results in sparser sampling from each mutational series and an increasingly diverse training set. For example, the second model (Fig. 5A) was trained on 1450 sequences from series 13, 23, 48 and 55, respectively. Overall, we trained a total of 27 models, the last of which comprises as few as 107 sequences per mutational series. The resulting models display substantial variations in their predictive power (Fig. 5A). Most models displayed variable R2 scores across different series, and we identified two salient patterns: some series are consistently well predicted even in small data scenarios (e.g. series 31 and 51), while others are particularly hard to regress (e.g. series 28 and 54) and possibly require a bespoke CNN architecture different from the one in Fig. 3A. The results also show that increased diversity has a minor impact on model generalization; although some series not included in training do have improved prediction scores (e.g. series 53 in Fig. 5A), we suspect this is likely a result of those series being particularly easy to regress. In general, we observed patterns of low or negative R2 scores for series not included in the aggregate. Similar results were observed for other random choices of mutational series employed for training (Supplementary Fig. S12). Fig. 5: Impact of sequence diversity on data efficiency and model coverage. A We trained CNNs on datasets of constant size and increasing sequence diversity. We trained a total of 27 models by successively aggregating fractions of randomly chosen mutational series into a new dataset for training; the total size of the training set was kept constant at 5800 sequences. Training on aggregated sequences achieves good accuracy for mutational series in the training set, but poor predictions for series not included in the training data.
This suggests that CNNs generalize poorly across unseen regions of the sequence space. Accuracy is reported as the R2 computed on 10% held-out sequences from each mutational series. We excluded two series from training to test the generalization performance of the last model. B Bubble plot shows the R2 values averaged across all mutational series for each model. Labels indicate the model number from panel A, and insets show schematics of the sequence space employed for training; for clarity, we have omitted model 1 from the plot. Improved sequence diversity leads to gains in predictive accuracy across larger regions of the sequence space; we observed similar trends for other random choices of series included in the training set (Supplementary Fig. S12). The decreasing number of training sequences per series reflects better data efficiency, thanks to an increasingly diverse set of training sequences. To quantify sequence diversity, we counted the occurrence of unique overlapping 5-mers across all sequences of each training set, and defined diversity as \(1/\sum_{i=1}^{100} c_i\), where ci is the count of the i-th most frequent 5-mer. Crucially, the results in Fig. 5B suggest that increased sequence diversity enlarges the region where the CNN can produce accurate predictions without increasing the size of the training data. We found that R2 > 30% in many regions of the sequence space can be achieved by models trained on just over a hundred sequences from those regions (e.g. model 27 in Fig. 5A). For comparison, the CNN trained on all series without controlled diversity can double that accuracy, but with a 9-fold increase in the size of the training data (R2 = 0.65 for N = 53,480 in Fig. 3B). This means that model coverage can be enlarged with shallow sampling of previously unseen regions of the sequence space, which provides a useful guideline for experimental design of screens aimed at training models on a limited number of variants.
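The diversity metric defined above can be implemented directly; the toy sequence sets below are our own and only illustrate that heavy 5-mer repetition lowers the score.

```python
import random
from collections import Counter

def sequence_diversity(seqs, k=5, top=100):
    """1 / (sum of counts of the `top` most frequent overlapping k-mers)."""
    counts = Counter()
    for s in seqs:
        counts.update(s[i:i + k] for i in range(len(s) - k + 1))
    return 1.0 / sum(c for _, c in counts.most_common(top))

rng = random.Random(0)
diverse = ["".join(rng.choice("ACGT") for _ in range(96)) for _ in range(50)]
repetitive = ["ACGTA" * 19 + "A"] * 50  # cycles through only five distinct 5-mers
```

A training set dominated by a few recurring 5-mers accumulates large counts in its top-100 list and therefore scores low, whereas a diverse set spreads the same number of 5-mer occurrences over many entries outside the top 100.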
To test the validity of this principle in a different expression chassis and construct library, we repeated the analysis in Fig. 5 using a recent genotype-phenotype screen of promoter sequences in Saccharomyces cerevisiae25. These data are comparable to the screen in Cambray et al.23 in the sequence length (80nt) and its highly clustered coverage of genotypic space (Fig. 6A). This clustered structure results from the design of the library itself, which is composed of 3929 variants of 199 natural promoters. A key difference between this new dataset and Cambray et al.23 is the construct architecture; unlike the UTR sequences in Fig. 1B, promoter sequences account for regulatory effects but do not undergo transcription. Akin to our results in Fig. 5, we aimed at testing the accuracy of machine learning regressors trained on datasets of constant size but increasing sequence diversity. Since this dataset contains a small number of variants for each gene (on average 20 variants/gene, see inset of Fig. 6A), we first randomly aggregated the variant clusters into twelve groups containing an average of 327 sequences/group. We subsequently trained five random forest models on N = 400 binary one-hot encoded sequences drawn from different groups. For example, as shown in Fig. 6B, model 1 was trained on 200 sequences from two groups, whereas model 2 was trained on 100 variants from four groups. The training results (Fig. 6B) show a strikingly similar pattern to those observed in our original dataset in Fig. 5, thus strongly suggesting that sequence diversity can be exploited to train models with broader coverage and improved data efficiency. Fig. 6: Sequence-to-expression models using promoter data from Saccharomyces cerevisiae. A Genotypic space of yeast promoter data from Vaishnav et al.25 visualized with the UMAP27 algorithm for dimensionality reduction; sequences were featurized using counts of overlapping 4-mers, as in Fig. 1B.
The dataset contains 3929 promoter variants (80nt long) of 199 native genes, as well as fluorescence measurements of a yellow fluorescent protein (YFP) reporter; inset shows the distribution of variants per gene across the whole dataset. B Bubble plots show the accuracy of five random forest (RF) models trained on datasets of constant size and increasing sequence diversity, following a similar strategy as in Fig. 5A. We first aggregated variant clusters into twelve groups, and then trained RF models by aggregating fractions of randomly chosen groups into a new dataset for training; the total size of the training set was kept constant at 400 sequences. Accuracy was quantified with the R2 score averaged across test sets from each group (~30 sequences/group) that were held out from training. Inset shows model accuracy in each test set. In line with the results in Fig. 5A, we observe that model coverage can be improved by adding small fractions of each group into the training set; we observed similar trends for other random choices of groups included in the training set (Supplementary Fig. S13). Details on data processing and model training can be found in the Methods and Supplementary Text. Sequence diversity was quantified as in Fig. 5B.

Discussion

Progress in high-throughput methods has led to large improvements in the size and coverage of genotype-phenotype screens, fuelling an increased interest in deep learning algorithms for phenotypic prediction9,12,13,14,16,17,19,34. Synthetic biology offers a host of applications that would benefit from such predictors, e.g. for optimization of protein-producing strains39, selection of enzymatic genes in metabolic engineering40, or the design of biosensors41. An often-overlooked limitation, however, is that deep learning models require huge amounts of data for training, and the sheer cost of the associated experimental work is a significant barrier for most laboratories.
Recent sequence-to-expression models have focused primarily on datasets with tens to hundreds of thousands of training sequences (Supplementary Table S1). While large data requirements are to be expected for prediction from long sequences such as entire protein coding regions, synthetic biologists often work with much shorter sequences to control protein expression levels (e.g. promoters3, ribosomal binding sequences4, terminators42 and others). From a machine learning standpoint, shorter sequences offer potential for training models with smaller datasets, which can lower the entry barriers for practitioners to adopt deep learning for strain optimization. Here, we examined a large panel of machine learning models, with particular emphasis on the relation between prediction accuracy and data efficiency. We used data from an experimental screen in which sequence features were manipulated using a Design of Experiments approach to perturb the translation efficiency of an sfGFP reporter in E. coli23. Thousands of local mutations were derived from more than fifty sequence seeds, yielding mutational series that enable deep focal coverage in distinct areas of the sequence space (Fig. 1B). By suitable sampling of these data, we studied the impact of the size and diversity of training sequences on the quality of the resulting machine learning models. Our analysis revealed two key results that can help incentivize the adoption of machine and deep learning in strain engineering. First, in our dataset we found that the number of training sequences required for accurate prediction is much smaller than what has been shown in the literature so far8,12,16,17,25. Traditional non-deep models can achieve good accuracy with as few as 1000–2000 sequences for training (Fig. 2B). We moreover showed that deep learning models can further improve accuracy with the same amount of data. 
For example, our convolutional neural networks achieved gains of up to 10% in median prediction scores across all mutational series when trained on the same 2000 sequences as the non-deep models (Fig. 3C). Such a performance improvement is a conservative lower bound, because we employed a fixed network architecture for all mutational series; further gains in accuracy can be obtained with custom architectures for different mutational series. Second, we found that sequence diversity can be exploited to increase data efficiency and enlarge the sequence space where models produce reliable predictions. Using two different datasets with a similar structure of their sequence coverage, the E. coli library from Cambray et al.23 as well as a recently published library of S. cerevisiae promoters25, we showed that machine learning models can expand their predictions to entirely new regions of the sequence space by training on a few additional samples from that region (Figs. 5, 6). This means that controlled sequence diversity can improve the coverage of sequence-to-expression models without the need for more training data. In other words, instead of utilizing fully randomized libraries for training8,16,17,18, it may be beneficial to first design a few isolated variants for coverage, and then increase the depth with many local variants in the vicinity of each seed. Our work strongly suggests that such a balance between coverage and depth can be advantageous in small-data scenarios, where fully randomized libraries would lead to datasets with faraway and isolated sequences that inherently require large datasets to achieve high accuracy. This principle is conceptually akin to the idea of "informed training sets" introduced by Wittmann and colleagues43 in the context of protein design, which can provide important benefits in cases where data efficiency is a concern.
Our observations raise exciting prospects for Design of Experiments strategies for training predictors of protein expression that are both accurate and data-efficient. Data requirements above 1000 sequences are still too costly for most practical applications. Further work is thus required on DNA encodings that are maximally informative for training, as well as model architectures that can deliver high accuracy for small datasets. Both strategies have proven highly successful in protein engineering44,45, yet their potential for DNA sequence design remains largely untapped. We found that seemingly superficial changes to DNA encodings, e.g. from binary one-hot to ordinal one-hot encodings (Fig. 2B), can have substantial impact on predictive performance. Moreover, although biophysical properties such as the CAI or the stability of mRNA secondary structures are not good predictors by themselves17, we observed small but encouraging improvements when these were employed in conjunction with one-hot encodings, particularly for small datasets. This suggests that richer mechanistic descriptors, e.g. by including positional information or base-resolution pairing probabilities of secondary structures, may yield further gains in accuracy. In agreement with other works46, we observed that sequence-to-expression models generalize poorly: their accuracy drops significantly for sequences that diverge from those employed for training. This limitation is particularly relevant for strain engineering, where designers may employ predictors to navigate the sequence space beyond the coverage of the training data. A recent study by Vaishnav et al. illustrated that these models can indeed generalize well using a massive training set with over 20,000,000 sequences25. Data of such scale are far beyond the capacity of most laboratories, and therefore it appears that poor generalization is likely to become the key limiting factor in the field. 
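To make the contrast between encodings concrete, here is a minimal sketch of a binary one-hot encoding versus one plausible reading of the ordinal variant; the function names are hypothetical, and the paper's actual Table 1 encodings were implemented in custom code:

```python
BASES = "ACGT"

def one_hot_binary(seq):
    # Each base becomes a length-4 indicator vector; the sequence is
    # flattened into a 4 * len(seq) binary feature vector.
    vec = []
    for base in seq:
        indicator = [0, 0, 0, 0]
        indicator[BASES.index(base)] = 1
        vec.extend(indicator)
    return vec

def one_hot_ordinal(seq):
    # Each base becomes a single integer in 0..3, giving a len(seq) vector.
    return [BASES.index(base) for base in seq]

x_bin = one_hot_binary("ACGT")   # 16 features
x_ord = one_hot_ordinal("ACGT")  # 4 features
```

Both carry identical information, yet regressors can respond quite differently to the two feature geometries, consistent with the sensitivity to encodings reported in Fig. 2B.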
We suggest that careful design of training libraries, in conjunction with algorithms for controlled sequence design38, may help to improve sequence coverage and avoid low-confidence regions of the predictors. Deep learning models promise to deliver large gains in efficiency across a range of synthetic biology applications. Such models inevitably require training data, and there is a risk that the associated experimental costs become an obstacle for many laboratories. In this work we have systematically mapped the relation between data size, diversity and the choice of machine learning models. Our results demonstrate the viability of more data-efficient deep learning models, helping to promote their adoption as a platform technology in microbial engineering.

Methods

Data sources and visualization

The E. coli dataset presented by Cambray et al.23 was obtained from the OpenScience Framework47. After removing sequences with missing values for sfGFP fluorescence and growth rate, the dataset contains ~228,000 sequences. In all trained models, we employed the arithmetic mean of sfGFP fluorescence across replicates for the case of normal translational initiation23. To visualize sequences in a two-dimensional space (Fig. 1B), we employed the UMAP algorithm27 v0.5.1 on sequences featurized using counts of overlapping k-mers. We found that the UMAP projection improved for larger k, and chose k = 4 to achieve a good trade-off between computation time and quality of projection (Supplementary Fig. S1); k-mer counting was done with custom Python scripts. In all cases, fluorescence measurements were normalized to the maximum sfGFP fluorescence across cells transformed with the same construct, averaged over 4 experimental replicates of the whole library23.

Training, validation, and test data

In Supplementary Fig. S3A we illustrate our strategy to partition the full dataset into sets for training, cross-validation and model testing.
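The k-mer featurization used for the UMAP projections can be sketched as follows; this is a minimal re-implementation over a fixed 4^k vocabulary, with illustrative names rather than the paper's own scripts:

```python
from collections import Counter
from itertools import product

def kmer_count_vector(seq, k=4):
    # Fixed vocabulary of all 4**k possible k-mers, in lexicographic order,
    # so every sequence maps to a vector of the same dimension.
    vocab = ["".join(p) for p in product("ACGT", repeat=k)]
    # Counts of overlapping k-mers in the sequence.
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return [counts.get(kmer, 0) for kmer in vocab]

vec = kmer_count_vector("ACGTACGTACGT")  # 256-dimensional feature vector
```

Stacking these vectors for all sequences yields the feature matrix that is then passed to UMAP for the two-dimensional projection.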
For each mutational series, we first perform a split retaining 10% of sequences as a fixed held-out set for model testing. We use the remaining sequences as a development set and perform a second split to obtain two partitions for each series. The first partition is for model training and comprises 3200 sequences, from which we used varying fractions for training regressors in each series. The second partition was employed for hyperparameter optimization, containing ~400 sequences from each series (10% of the whole series) that we then merged into a large validation set comprising 22,400 sequences (56 series × 400 sequences per series) from all series. We kept the validation set fixed and employed it for hyperparameter optimization of both non-deep and deep models. In all data splits, we stratified the sfGFP fluorescence data to ensure that the phenotype distributions are preserved. Stratification was done with the verstack package, which employs binning for continuous variables; we further customized the code to gain control of the binning resolution.

Model training

Non-deep machine learning models

DNA encodings (Table 1) were implemented with custom Python code, and all non-deep models were trained using the scikit-learn Python package. To determine model hyperparameters, we used a validation set for all combinations of encodings and regressors. As illustrated in Supplementary Fig. S3B, for each model we explored the hyperparameter search space (Supplementary Table S3) for all encodings using grid search with 10-fold cross-validation on 90% of our validation set (~20,000 sequences), using mean squared error (MSE) as the performance metric. This resulted in six hyperparameter configurations for each regressor (one for each encoding).
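Stratifying a split on a continuous phenotype works by binning the target, much as the verstack package does internally; the sketch below uses hypothetical names and quantile bins to illustrate the idea:

```python
import random

def stratified_split(y, test_frac=0.1, n_bins=10, seed=0):
    # Assign each sample to a quantile bin of the continuous target.
    order = sorted(range(len(y)), key=lambda i: y[i])
    bin_of = {}
    for rank, idx in enumerate(order):
        bin_of[idx] = min(rank * n_bins // len(y), n_bins - 1)
    # Draw the same fraction from every bin so that the binned
    # phenotype distribution is preserved in both partitions.
    rng = random.Random(seed)
    test = []
    for b in range(n_bins):
        members = [i for i in range(len(y)) if bin_of[i] == b]
        rng.shuffle(members)
        test.extend(members[: max(1, round(len(members) * test_frac))])
    test_set = set(test)
    train = [i for i in range(len(y)) if i not in test_set]
    return train, test

rng_data = random.Random(1)
y = [rng_data.random() for _ in range(1000)]
train, test = stratified_split(y)  # 900 train / 100 test indices
```

Controlling `n_bins` corresponds to the binning-resolution customization mentioned in the text.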
For many regressors, we found that the same configuration was optimal for several encodings simultaneously, and we thus settled on the most frequent configuration among the six encodings; in case of a tie between configurations, we settled for the one with the best MSE computed on the remaining 10% of our whole validation set.

Convolutional neural networks

CNNs were trained on Tesla K80 GPUs from Google Colaboratory. To design the CNN architectures, we used the Sequential class of the Keras package with the TensorFlow backend48,49. All CNNs were trained on binary one-hot encoded sequences with mean squared error as the loss function, a batch size of 64, a learning rate of 1 × 10−3, and the Adam optimizer50. Since Adam computes adaptive learning rates for each weight of the neural network, we found that the default options were adequate and did not specify a learning rate schedule. We set the maximum number of epochs to 100, and used 15 epochs without loss improvement over the validation set as an early stopping criterion to prevent overfitting. Model hyperparameters were selected with Bayesian optimization implemented in the HyperOpt package35. Specifically, as shown in Supplementary Fig. S3C, we performed five iterations of the HyperOpt routine using 90% of our validation set (~20,000 sequences), where subsets of the search space were evaluated (Supplementary Table S4). We used the Tree of Parzen Estimators (TPE)51 as the acquisition function, and set the number of architecture combinations to 50. This resulted in five candidate architectures, from which we chose the one with the best validation MSE computed on a stratified sample of size 10% of the whole validation set. The resulting model architecture is described in Supplementary Table S6. To verify that the selected architecture works best for our study, we performed an additional test (Supplementary Fig. S9) where we trained CNNs of varying width and depth and compared them to the results in Fig. 3C.
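The early-stopping rule (halt after 15 epochs without validation-loss improvement, capped at 100 epochs) behaves as in this illustrative sketch; it stands in for, and is not, the actual Keras callback:

```python
def stopping_epoch(val_losses, patience=15, max_epochs=100):
    # Track the best validation loss seen so far; stop once `patience`
    # epochs pass without improvement, or at `max_epochs` at the latest.
    best = float("inf")
    best_epoch = 0
    for epoch, loss in enumerate(val_losses[:max_epochs], start=1):
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            return epoch
    return min(len(val_losses), max_epochs)

# Loss improves for 20 epochs, then plateaus: training halts at epoch 35.
losses = [1.0 - 0.01 * i for i in range(20)] + [0.81] * 80
```

In Keras, the same behaviour comes from passing an `EarlyStopping(patience=15)` callback together with `epochs=100` to `model.fit`.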
To achieve this, we perturbed the number of convolutional filters and layers, for width and depth respectively, and trained the resulting architectures using 75% of sequences for each mutational series (Supplementary Fig. S9).

Model testing

In all cases we did five training repeats on resampled training sets and a fixed test set. Model accuracy was computed as the coefficient of determination (R2) on held-out sequences, averaged across the five training repeats. The R2 score for each training repeat was defined as:

$${R}^{2}=1-\frac{{\sum }_{i}{({y}_{i}-{f}_{i})}^{2}}{{\sum }_{i}{({y}_{i}-\bar{y})}^{2}},$$

where yi and fi are the measured and predicted fluorescence of the ith sequence in the test set, respectively, and \(\bar{y}\) is the average fluorescence across the whole test set. Note that for a perfect fit we have R2 = 1, and conversely R2 = 0 for a baseline model that predicts the average fluorescence (i.e. \({f}_{i}=\bar{y}\) for all sequences). Negative R2 scores thus indicate an inadequate model structure with worse predictions than the baseline model.

Interpretability analysis

For the interpretability results in Fig. 4A–C, we employed DeepLIFT37, which uses back-propagation to produce importance or "attribution" scores for input features with respect to a baseline reference input; we chose a blank sequence as the reference. We used the GenomicsDefault option, which implements the Rescale and RevealCancel rules for convolutional and dense layers, respectively. The line plots in Fig. 4A are the attribution scores of 30 random test sequences for the CNN and MLP models trained on mutational series 21. The distance heatmaps in Fig. 4B were produced by computing the cosine distance between vectors of attribution scores, and then using hierarchical clustering to compare both models. The degree of clustering was quantified by k-means scores (Fig. 4C); lower scores suggest more clustering of the distance matrix.
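The R2 definition above translates directly into code; a minimal sketch that also demonstrates the three regimes discussed in the text (perfect fit, mean-predicting baseline, and worse-than-baseline):

```python
def r2_score(y_true, y_pred):
    # R2 = 1 - SS_res / SS_tot, exactly as defined in the text.
    y_bar = sum(y_true) / len(y_true)
    ss_res = sum((y - f) ** 2 for y, f in zip(y_true, y_pred))
    ss_tot = sum((y - y_bar) ** 2 for y in y_true)
    return 1.0 - ss_res / ss_tot

y = [1.0, 2.0, 3.0, 4.0]
perfect = r2_score(y, y)                    # 1.0 for a perfect fit
baseline = r2_score(y, [2.5] * 4)           # 0.0 for the mean predictor
worse = r2_score(y, [4.0, 3.0, 2.0, 1.0])   # negative: worse than baseline
```

This matches the behaviour of `sklearn.metrics.r2_score`, which the scikit-learn pipeline reports by default.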
Results for all other mutational series can be found in Supplementary Fig. S10.

Impact of sequence diversity

Escherichia coli dataset

The models in Fig. 5 were trained on data of constant size and increasing sequence diversity. We successively aggregated fractions of mutational series to create new training sets with improved diversity. We employed the same CNN architecture and training strategy as in Fig. 3A with the same hyperparameters (Supplementary Table S6) for all 27 models. To ensure a comparison solely on the basis of diversity, we fixed the size of the training set to 5800 sequences. To increase diversity, for successive models we sampled training sequences from two additional series, as shown in Fig. 5. The specific series for the aggregates were randomly chosen; four training repeats with randomized selection of series can be found in Supplementary Fig. S12.

Saccharomyces cerevisiae dataset

We obtained the promoter dataset presented in Supplementary Fig. 4F of Vaishnav et al.25 from CodeOcean52. The data contain 3929 yeast promoter sequences with YFP fluorescence readouts. To visualize the yeast sequences (Fig. 6A), we employed the same strategy as in Fig. 1B for the E. coli dataset, and used the UMAP algorithm on counts of overlapping 4-mers. Additional details can be found in the Supplementary Text. For the models in Fig. 6B, we first aggregated sequences from the clusters in Fig. 6A into twelve groups. We then employed the same strategy as in Fig. 5, and successively aggregated fractions of groups to create new training sets with improved diversity. We used the same Random Forest configuration (Supplementary Table S6) for all 5 models. We fixed the size of the training set to 400 sequences, and to increase diversity for successive models, we sampled training sequences from two additional groups at a time (Fig. 6B).
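The aggregation strategy described above (constant training-set size, with sequences drawn from a growing number of series or groups) can be sketched as follows; the helper and variable names are hypothetical:

```python
import random

def diverse_training_set(groups, n_total, n_groups, seed=0):
    # Draw a constant-size training set spread evenly over a random
    # subset of `n_groups` groups (mutational series or promoter groups).
    rng = random.Random(seed)
    chosen = rng.sample(sorted(groups), n_groups)
    per_group = n_total // n_groups
    train = []
    for g in chosen:
        train.extend(rng.sample(groups[g], per_group))
    return train

# Twelve toy groups of 327 "sequences" each, as in the yeast dataset.
groups = {g: [f"g{g}_s{j}" for j in range(327)] for g in range(12)}
model1_train = diverse_training_set(groups, 400, 2)  # 200 seqs x 2 groups
model2_train = diverse_training_set(groups, 400, 4)  # 100 seqs x 4 groups
```

Successive models simply increase `n_groups` while `n_total` stays fixed, so any accuracy gains are attributable to diversity rather than data volume.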
The specific groups for the aggregates were randomly chosen; four training repeats with randomized selection of groups can be found in Supplementary Fig. S13. Additional details can be found in the Supplementary Text.

Data availability

The genotype-phenotype data employed in this study come from two literature sources23,25. For reproducibility, both datasets have been cleaned and reorganized in a form suitable for machine learning analyses; the cleaned data used in this study are available in Zenodo53 at https://doi.org/10.5281/zenodo.7273952.

Code availability

Python code for model training, data analysis, and data plotting can be run from Google Colaboratory and is available in Zenodo53 at https://doi.org/10.5281/zenodo.7273952.

References

1. Terpe, K. Overview of bacterial expression systems for heterologous protein production: from molecular and biochemical fundamentals to commercial systems. Appl. Microbiol. Biotechnol. 72, 211–222 (2006).
2. Sørensen, H. P. & Mortensen, K. K. Advanced genetic strategies for recombinant protein expression in Escherichia coli. J. Biotechnol. 115, 113–128 (2005).
3. Blazeck, J. & Alper, H. S. Promoter engineering: recent advances in controlling transcription at the most fundamental level. Biotechnol. J. 8, 46–58 (2013).
4. Salis, H. M., Mirsky, E. A. & Voigt, C. A. Automated design of synthetic ribosome binding sites to control protein expression. Nat. Biotechnol. 27, 946–950 (2009).
5. Kinney, J. B., Murugan, A., Callan, C. G. & Cox, E. C. Using deep sequencing to characterize the biophysical mechanism of a transcriptional regulatory sequence. Proc. Natl Acad. Sci. 107, 9158–9163 (2010).
6. Sharon, E. et al. Inferring gene regulatory logic from high-throughput measurements of thousands of systematically designed promoters. Nat. Biotechnol. 30, 521–530 (2012).
7. Kosuri, S. et al. Composability of regulatory sequences controlling transcription and translation in Escherichia coli. Proc. Natl Acad. Sci. 110, 14024–14029 (2013).
8. de Boer, C. G. et al. Deciphering eukaryotic gene-regulatory logic with 100 million random promoters. Nat. Biotechnol. 38, 56–65 (2020).
9. Sample, P. J. et al. Human 5' UTR design and variant effect prediction from a massively parallel translation assay. Nat. Biotechnol. 37, 803–809 (2019).
10. Raad, M. D., Modavi, C., Sukovich, D. J. & Anderson, J. C. Observing biosynthetic activity utilizing next generation sequencing and the DNA linked enzyme coupled assay. ACS Chem. Biol. 12, 191–199 (2017).
11. Yus, E., Yang, J.-S., Sogues, A. & Serrano, L. A reporter system coupled with high-throughput sequencing unveils key bacterial transcription and translation determinants. Nat. Commun. 8, 1–12 (2017).
12. Alipanahi, B., Delong, A., Weirauch, M. T. & Frey, B. J. Predicting the sequence specificities of DNA- and RNA-binding proteins by deep learning. Nat. Biotechnol. 33, 831–838 (2015).
13. Valeri, J. A. et al. Sequence-to-function deep learning frameworks for engineered riboregulators. Nat. Commun. 11, 1–14 (2020).
14. Avsec, Ž. et al. Effective gene expression prediction from sequence by integrating long-range interactions. Nat. Methods 18, 1196–1203 (2021).
15. Puchta, O. et al. Genotype-phenotype map of an RNA-ligand complex. bioRxiv (2020). https://doi.org/10.1101/2020.12.17.423258
16. Höllerer, S. et al. Large-scale DNA-based phenotypic recording and deep learning enable highly accurate sequence-function mapping. Nat. Commun. 11, 1–15 (2020).
17. Angenent-Mari, N. M., Garruss, A. S., Soenksen, L. R., Church, G. & Collins, J. J. A deep learning approach to programmable RNA switches. Nat. Commun. 11, 1–12 (2020).
18. Kotopka, B. J. & Smolke, C. D. Model-driven generation of artificial yeast promoters. Nat. Commun. 11, 1–13 (2020).
19. Cuperus, J. T. et al. Deep learning of the regulatory grammar of yeast 5' untranslated regions from 500,000 random sequences. Genome Res. 27, 2015–2024 (2017).
20. Camacho, D. M., Collins, K. M., Powers, R. K., Costello, J. C. & Collins, J. J. Next-generation machine learning for biological networks. Cell 173, 1581–1592 (2018).
21. Zhou, J. & Troyanskaya, O. G. Predicting effects of noncoding variants with deep learning-based sequence model. Nat. Methods 12, 931–934 (2015).
22. Kelley, D. R., Snoek, J. & Rinn, J. L. Basset: learning the regulatory code of the accessible genome with deep convolutional neural networks. Genome Res. 26, 990–999 (2016).
23. Cambray, G., Guimaraes, J. C. & Arkin, A. P. Evaluation of 244,000 synthetic sequences reveals design principles to optimize translation in Escherichia coli. Nat. Biotechnol. 36, 1005 (2018).
24. Samek, W., Montavon, G., Vedaldi, A., Hansen, L. K. & Müller, K.-R. Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, vol. 11700 (Springer Nature, 2019).
25. Vaishnav, E. D. et al. The evolution, evolvability and engineering of gene regulatory DNA. Nature 603, 455–463 (2022).
26. Guimaraes, J. C., Rocha, M., Arkin, A. P. & Cambray, G. D-Tailor: automated analysis and design of DNA sequences. Bioinformatics 30, 1087–1094 (2014).
27. McInnes, L., Healy, J. & Melville, J. UMAP: uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018).
28. Hastie, T., Tibshirani, R. & Friedman, J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction (Springer Science & Business Media, 2009).
29. Rumelhart, D. E., Hinton, G. E. & Williams, R. J. Learning internal representations by error propagation. Tech. Rep., California Univ San Diego La Jolla Inst for Cognitive Science (1985).
30. Drucker, H. et al. Support vector regression machines. Adv. Neural Inf. Process. Syst. 9, 155–161 (1997).
31. Breiman, L. Random forests. Mach. Learn. 45, 5–32 (2001).
32. Kudla, G., Murray, A. W., Tollervey, D. & Plotkin, J. B. Coding-sequence determinants of gene expression in Escherichia coli. Science 324, 255–258 (2009).
33. Quax, T. E., Claassens, N. J., Söll, D. & van der Oost, J. Codon bias as a means to fine-tune gene expression. Mol. Cell 59, 149–161 (2015).
34. Zrimec, J. et al. Controlling gene expression with deep generative design of regulatory DNA. Nat. Commun. 13, 5099 (2022).
35. Bergstra, J., Yamins, D. & Cox, D. Making a science of model search: hyperparameter optimization in hundreds of dimensions for vision architectures. In JMLR Workshop and Conference Proceedings, vol. 28, 115–123 (2013).
36. Gehring, J., Auli, M., Grangier, D., Yarats, D. & Dauphin, Y. N. Convolutional sequence to sequence learning. In International Conference on Machine Learning, 1243–1252 (2017).
37. Shrikumar, A., Greenside, P. & Kundaje, A. Learning important features through propagating activation differences. In Proc. of the 34th International Conference on Machine Learning, vol. 70, 3145–3153 (JMLR.org, 2017).
38. Linder, J., Bogard, N., Rosenberg, A. B. & Seelig, G. A generative neural network for maximizing fitness and diversity of synthetic DNA and protein sequences. Cell Syst. 11, 49–62.e16 (2020).
39. Volk, M. J. et al. Biosystems design by machine learning. ACS Synth. Biol. 9, 1514–1533 (2020).
40. Jang, W. D., Kim, G. B., Kim, Y. & Lee, S. Y. Applications of artificial intelligence to enzyme and pathway design for metabolic engineering. Curr. Opin. Biotechnol. 73, 101–107 (2022).
41. Verma, B. K., Mannan, A. A., Zhang, F. & Oyarzún, D. A. Trade-offs in biosensor optimization for dynamic pathway engineering. ACS Synth. Biol. 11, 228–240 (2022).
42. Tarnowski, M. J. & Gorochowski, T. E. Massively parallel characterization of engineered transcript isoforms using direct RNA sequencing. Nat. Commun. 13, 1–14 (2022).
43. Wittmann, B. J., Yue, Y. & Arnold, F. H. Informed training set design enables efficient machine learning-assisted directed protein evolution. Cell Syst. 12, 1026–1045.e7 (2021).
44. Biswas, S., Khimulya, G., Alley, E. C., Esvelt, K. M. & Church, G. M. Low-N protein engineering with data-efficient deep learning. Nat. Methods 18, 389–396 (2021).
45. Rives, A. et al. Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences. Proc. Natl Acad. Sci. USA 118 (2021).
46. Ching, T. et al. Opportunities and obstacles for deep learning in biology and medicine. J. R. Soc. Interface 15, 20170387 (2018).
47. Cambray, G. Data and scripts for "Evaluation of 244,000 synthetic sequences reveals design principles to optimize translation in Escherichia coli". OSF https://doi.org/10.17605/OSF.IO/A56VU (2019).
48. Chollet, F. et al. Keras: deep learning library for Theano and TensorFlow. https://keras.io (2015).
49. Abadi, M. et al. TensorFlow: large-scale machine learning on heterogeneous systems. arXiv (2016). https://doi.org/10.48550/arXiv.1603.04467
50. Kingma, D. P. & Ba, J. Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014).
51. Bergstra, J., Bardenet, R., Bengio, Y. & Kégl, B. Algorithms for hyper-parameter optimization. Advances in Neural Information Processing Systems 24 (2011).
52. CodeOcean https://codeocean.com/capsule/8020974/tree/v1 (2022).
53. Nikolados, E.-M., Wongprommoon, A., Mac Aodha, O., Cambray, G. & Oyarzún, D. A. Code and data for "Accuracy and data efficiency in deep learning models of protein expression". Zenodo https://doi.org/10.5281/zenodo.7273952 (2022).

Acknowledgements

E.M.N. was supported by a doctoral studentship from the Darwin Trust of Edinburgh. D.A.O. was supported by United Kingdom Research and Innovation (grant EP/S02431X/1).

Author information

School of Biological Sciences, University of Edinburgh, Edinburgh, EH9 3JH, UK: Evangelos-Marios Nikolados, Arin Wongprommoon & Diego A. Oyarzún
School of Informatics, University of Edinburgh, Edinburgh, EH8 9AB, UK: Oisin Mac Aodha & Diego A. Oyarzún
The Alan Turing Institute, London, NW1 2DB, UK
Diversité des Génomes et Interactions Microorganismes Insectes, University of Montpellier, INRAE UMR 1333, Montpellier, France: Guillaume Cambray
Centre de Biologie Structurale, University of Montpellier, INSERM U1054, CNRS UMR5048, Montpellier, France

Authors: Evangelos-Marios Nikolados, Arin Wongprommoon, Oisin Mac Aodha, Diego A. Oyarzún

Author contributions

E.M.N. and D.A.O. designed the research and analyzed data. E.M.N. performed model implementation and training. A.W. tested code implementation and provided general feedback. G.C. and O.M.A. provided counsel on data analysis and computational aspects. D.A.O. provided overall supervision and direction of the work.

Correspondence to Diego A. Oyarzún.

Peer review: Nature Communications thanks Guy-Bart Stan and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.

Citation: Nikolados, E.-M., Wongprommoon, A., Mac Aodha, O. et al. Accuracy and data efficiency in deep learning models of protein expression. Nat. Commun. 13, 7755 (2022). https://doi.org/10.1038/s41467-022-34902-5
Integrated Research Training Group Porous Media Perspectives Press & Communication Pretty Porous – Alles Porös Status Seminars Status Seminar 2019 Core Study Programme Seminars and Summer Schools Research Stay Abroad SFB 1313 Summer School 2019 Scientific Poster Short Course: Birgit Lukowski (2023) Turbulent Porous Medium Flow Short Course: Xu Chu (2022) Free Flow Coupling Short Course: Rainer Helmig (2022) Transport of Viruses Short Course: Majid Hassanizadeh (2021) Linear Poro-Elasticity Short Course: Holger Steeb (2021) Theory of Poroelasticity: Holger Steeb (2021) Experimental Techniques I + II: Dongwong Lee, Matthias Ruf, Holger Steeb, and Sabina Haber-Pohlmeier (2021) Mathematical Theory for Simulation Practitioners Course: Peter Knabner (2021) Fracture Reactivation and Propagation Short Course: Inga Berre (2021) Mesoscale Simulations Short Course: Jens Harting (2021) Capillarity Short Course: Majid Hassanizadeh (2021) Multiscale Short Course: Hadi Hajibeygi (2020) Homogenization Short Course: Sorin Pop (2020) Multiphase Short Course: Majid Hassanizadeh (2019) Solvers Short Course: Florin Radu (2019) Transport Short Course: Peter Knabner (2018) Dumux Short Course: Bernd Flemisch (2018) Challenges Short Course: Rainer Helmig (2018) SRP NUPUS NUPUS Research Structure Participating Institutes of the University of Stuttgart National and International Partner Institutes NUPUS Meetings NUPUS Scholarschips Holders SFB 1313 in the Media Project Area A Project Area B Project Area C Project Area D Project INF Project Ö Project Z01 Project Z02 (PML) Internal Research Projects Associated Researchers Research Project A-X1 Research Project B-X1 Research Project C-X1 Research Project D-X1 Internal Research Project I-01 Research Project A01 Research Project B01 Research Project C01 Research Project D01 Publications in scientific journals Published datasets Spokesmen Postdoctoral Researchers and Associated Researchers SFB 1313 Boards Mercator Fellows Development and 
realisation of validation benchmarks Bernd Flemisch Sergey Oladyshkin Farid Mohammadi Thomas Fetzer (Alumnus) Published data sets Mohammadi, F., Eggenweiler, E., Flemisch, B., Oladyshkin, S., Rybak, I., Schneider, M., & Weishaupt, K. (2021). Uncertainty-aware Validation Benchmarks for Coupling Free Flow and Porous-Medium Flow. In Water Resour. Res. (preprint). https://arxiv.org/abs/2106.13639 Scheurer, S., Silva, A. S. R., Mohammadi, F., Hommel, J., Oladyshkin, S., Flemisch, B., & Nowak, W. (2020). Surrogate-based Bayesian Comparison of Computationally Expensive Models: Application to Microbially Induced Calcite Precipitation. https://arxiv.org/abs/2011.12756 Jaust, A., Weishaupt, K., Mehl, M., & Flemisch, B. (2020). Partitioned Coupling Schemes for Free-Flow and Porous-Media Applications with Sharp Interfaces. In R. Klöfkorn, E. Keilegavlen, F. A. Radu, & J. Fuhrmann (Eds.), Finite Volumes for Complex Applications IX - Methods, Theoretical Aspects, Examples (pp. 605--613). Springer International Publishing. https://doi.org/10.1007/978-3-030-43651-3_57 (Journal-) Articles Cheng, K., Lu, Z., Xiao, S., Oladyshkin, S., & Nowak, W. (2022). Mixed covariance function kriging model for uncertainty quantification. International Journal for Uncertainty Quantification, 12(3), 17--30. Kröker, I., & Oladyshkin, S. (2022). Arbitrary multi-resolution multi-wavelet-based polynomial chaos expansion for data-driven uncertainty quantification. Reliability Engineering &amp$\mathsemicolon$ System Safety, 108376. https://doi.org/10.1016/j.ress.2022.108376 Seitz, G., Mohammadi, F., & Class, H. (2021). Thermochemical Heat Storage in a Lab-Scale Indirectly Operated CaO/Ca(OH)2 Reactor—Numerical Modeling and Model Validation through Inverse Parameter Estimation. Applied Sciences, 11(2), 682. https://doi.org/10.3390/app11020682 Berre, I., Boon, W. 
M., Flemisch, B., Fumagalli, A., Gläser, D., Keilegavlen, E., Scotti, A., Stefansson, I., Tatomir, A., Brenner, K., Burbulla, S., Devloo, P., Duran, O., Favino, M., Hennicker, J., Lee, I.-H., Lipnikov, K., Masson, R., Mosthaf, K., … Zulian, P. (2021). Verification benchmarks for single-phase flow in three-dimensional fractured porous media. Advances in Water Resources, 147, 103759. https://doi.org/10.1016/j.advwatres.2020.103759 Oladyshkin, S., Mohammadi, F., Kroeker, I., & Nowak, W. (2020). Bayesian3 Active Learning for the Gaussian Process Emulator Using Information Theory. Entropy, 22(8), 890. https://doi.org/10.3390/e22080890 Oladyshkin, S., & Nowak, W. (2019). The Connection between Bayesian Inference and Information Theory for Model Selection, Information Gain and Experimental Design. Entropy, 21(11), 1081. https://doi.org/10.3390/e21111081 Schneider, M., Gläser, D., Flemisch, B., & Helmig, R. (2018). Comparison of finite-volume schemes for diffusion problems. Oil & Gas Science and Technology – Revue d'IFP Energies Nouvelles, 73, 82. https://doi.org/10.2516/ogst/2018064 This project's goal is a statistical framework for the comparative evaluation of the SFB's computational models through uncertainty-aware validation benchmarks. Here, the main challenge arises from the possibly large uncertainties present in the experimental data and the simulation results. A so-called validation metric that compares system response quantities of an experiment with those from a computational model has to incorporate parameter and conceptual uncertainties rigorously. Bayesian validation framework To assess the overall accuracy of the computational models participating in the benchmarks under uncertainty of both the simulation results and the experimental data, we have developed a Bayesian calibration and validation framework that incorporates a probabilistic modelling technique quantifying remaining post-calibration uncertainty. Figure 1 shows an overview of this framework. 
Within this framework, the parametric uncertainty of a computational model, based on the modeler's expert knowledge, is propagated to obtain a so-called prior predictive distribution. Then, in the calibration step, posterior knowledge is obtained by updating the prior belief based on Bayesian notions, incorporating all sources of errors in the experiment and the model. During this process, global sensitivity indices can be calculated that rank the influence of the uncertain model parameters.

Figure 1: An overview of the Bayesian validation framework. (Photo: SFB 1313 / University of Stuttgart)

In the validation of a single model, the hypothesis under test is that this model can satisfactorily represent the real system of interest. The calibrated model's results are compared to a new, unseen set of experimental data, which corresponds to an evaluation of the validation metric. Moreover, several models might exist with different approaches and assumptions to analyse the occurring processes. For this case, a comparative validation is performed, where the question is which model within the pool of available models makes the best prediction regarding the values observed in the experiments. To provide a quantitative validation metric and an objective model ranking, we use Bayesian model evidence, employing the so-called Bayes factor introduced in Bayesian hypothesis testing. Propagating the parametric uncertainty through the given computationally demanding models in the framework is not feasible via Monte Carlo or even Markov chain Monte Carlo approaches for the competing physical models. Therefore, one main focus of the project has been to substitute the original computational models with easy-to-evaluate surrogates, based on recent developments in the theory of polynomial chaos expansion (PCE). We used a data-driven arbitrary PCE (aPCE), which can operate with probability measures implicitly and incompletely defined via statistical moments.
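The moment-based construction behind aPCE can be sketched in a few lines: given only the raw statistical moments of an otherwise unknown input distribution, an orthonormal polynomial basis is obtained from the Cholesky factor of the Hankel matrix of moments. This is a generic illustration of the idea, not the project's implementation:

```python
import numpy as np

def apce_basis(moments, degree):
    """Coefficients of orthonormal polynomials w.r.t. a measure known only
    through its raw moments [mu_0, mu_1, ..., mu_{2*degree}].
    Row i of the returned matrix holds the monomial coefficients of p_i."""
    n = degree + 1
    # Hankel matrix of moments: M[i, j] = <x^i, x^j> = mu_{i+j}
    M = np.array([[moments[i + j] for j in range(n)] for i in range(n)])
    L = np.linalg.cholesky(M)
    return np.linalg.inv(L)  # rows A[i] satisfy A M A^T = I

# Example: the raw moments of the standard normal (mu_4 = 3) reproduce the
# normalized Hermite polynomials, e.g. p_2(x) = (x^2 - 1) / sqrt(2).
A = apce_basis([1.0, 0.0, 1.0, 0.0, 3.0], degree=2)
M = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 0.0], [1.0, 0.0, 3.0]])
print(np.allclose(A @ M @ A.T, np.eye(3)))  # True: the basis is orthonormal
```

Because only moments enter the construction, the same routine works for data-driven measures where no closed-form density is available.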
We have extended aPCE to a Bayesian sparse aPCE (BsaPCE) representation through Bayesian sparse learning using a fast marginal likelihood maximisation algorithm. Further, we have introduced a sampling strategy that sequentially refines the surrogate model to avoid clusters in specific regions of the polynomial representation. In a model comparison study resulting from a successful collaboration between SFB members within Project Area B during the first SFB 1313 summer school in September 2019, we compared two models describing flow in fractured porous media against experiments. The first investigated model, denoted by B01, originated from Project B01-Keip and employed a phase-field representation of fractures and a finite-element discretisation. The second model, denoted by B03 and originating from Project B03-Rohde, used a discrete-fracture-matrix method and a finite-volume discretisation. We analysed these two models against microfluidic experiments performed by B05-Steeb/Nowak. Figure 2 illustrates the distribution of the Bayes factor as a key measure obtained from applying our validation framework to the benchmark case described above. It also shows the related significance levels as vertical solid lines. We conclude that both models are equivalent, in the sense that there is no substantial evidence to prefer one over the other based on their predictive capability for the selected scenario, because the probability density distribution of the Bayes factor is close to one.

Figure 2: Bayes factor for models B01 and B03

Apart from applying the Bayesian validation framework to individual projects, other substantial benchmarking efforts have been performed in the context of Project Area B. In preliminary work, verification benchmark cases for single-phase flow in two-dimensional fractured porous media have been defined, and the performance of several discretisation methods has been evaluated.
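The Bayes-factor comparison used in studies like the one above can be sketched with brute-force Monte Carlo over the prior: the Bayesian model evidence of each model is estimated by averaging its likelihood over prior samples, and the Bayes factor is the ratio of the two evidences. The two competing models below are hypothetical toy models (not B01/B03), and direct sampling is feasible here only because they are trivial to evaluate; the expensive SFB models require the surrogates discussed in the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: generated by a linear process y = 2x + noise
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + rng.normal(0.0, 0.1, x.size)
sigma = 0.1  # assumed observation noise

def log_evidence(model, n_prior=20000):
    """Monte Carlo estimate of log p(y | M) = log E_prior[ p(y | theta, M) ]."""
    theta = rng.normal(0.0, 2.0, n_prior)          # prior: theta ~ N(0, 2^2)
    resid = y[None, :] - model(theta[:, None], x[None, :])
    # Gaussian log-likelihood, up to a constant that is shared by both models
    loglik = -0.5 * np.sum((resid / sigma) ** 2, axis=1)
    m = loglik.max()                               # log-sum-exp for stability
    return m + np.log(np.mean(np.exp(loglik - m)))

logZ1 = log_evidence(lambda t, xx: t * xx)         # M1: y = theta * x
logZ2 = log_evidence(lambda t, xx: t + 0.0 * xx)   # M2: y = theta (constant)
print(logZ1 - logZ2 > 0)  # True: the evidence strongly favors the linear model
```

A log Bayes factor far above zero corresponds to decisive evidence for the first model; a value near zero, as for B01 versus B03 in the text, means neither model is preferred.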
By distributing the CFP and organising a corresponding mini-symposium at the SIAM GS 2019 in March 2019, a group of 26 participants representing 17 different discrete-fracture-matrix methods formed, which led to a publication. We have recently performed a Bayesian assessment of computational models describing geochemical processes in subsurface reservoirs. These models predict the change in material properties of porous media due to microbial activities. This study assessed a full-complexity microbially induced calcite precipitation model (M_FC) along with two simplifications. The first simplified model, denoted as the initial biofilm model (M_IB), considers the biofilm to be already established at the beginning. The second simplified model assumes that all the urea injected into the system precipitates as calcite; it is denoted as the simple chemistry model (M_SC). We have used our framework components to perform a Bayesian justifiability analysis to judge the competing models' performance against the available experimental data (M_D) and to assess similarities between the models.

Figure 3: Bayesian model selection and justifiability analysis for calcium concentration over an increasing amount of experimental data

Figure 3 illustrates the model ranking against the experimental data and their predictions using so-called model weights for predicting the calcium concentration. The first columns of the so-called model confusion matrices show the ability of the competing models (M_FC, M_IB and M_SC) to replicate the experimental data (M_D). The other entries indicate the probability that model M_k (rows) is the data-generating process of the predictions made by model M_l (columns). The off-diagonal entries indicate similarities between the models; the initial biofilm model M_IB has been identified as similar to the full-complexity model M_FC. The main-diagonal entries represent the models' ability to identify their own predictions.
Our analysis indicated that the simple chemistry model M_SC and the full-complexity model M_FC identify themselves best. We also observed that the experimental data set is too small to justify the initial biofilm model M_IB in terms of the calcium concentration. In the context of thermochemical heat storage, a computational model for an indirectly operated CaO/Ca(OH)2 reactor was calibrated and validated using our framework. By employing Bayesian inference, the decrease of the reaction rate with progressing conversion of the reactive material was identified as essential for the desired match with experimental results. The correspondingly calibrated model revealed that more heat is lost over the reactor surface than is transported in the heat transfer channel, which caused a considerable speed-up of the discharge reaction. We validated the calibrated model with a second set of experimental results. The computational model was replaced by a surrogate based on PCE and principal component analysis. We were also involved in developing an interactive demonstrator for geothermal energy sites using on-the-fly CUDA-based evaluation of aPCE surrogates. The interactive demonstrator was presented in SFB 1313's public science exhibition "Pretty Porous - Alles Porös", shown at the Planetarium Stuttgart from 18 June to 31 August 2020. Visitors could interactively explore the underground heat transport at a geothermal energy site, modifying its properties via a touch screen. We have initiated an uncertainty-aware validation benchmark to address the question of how to appropriately conceptualise the interface conditions and related modelling parameters for coupling free flow and porous-medium flow. The considered computational models consist of the coupled Stokes-Darcy model with the classical set of interface conditions, the pore-network model developed in A02-Helmig/Weigand, and the generalised interface conditions for the REV-scale model from A03-Rybak.
We defined data extraction points for velocity and pressure for both the calibration and the validation phase concerning system response quantities. As the data-generating reference solution process, we considered a pore-scale resolved model. With the assessment of the pore-network model, we briefly illustrate one particular component in the following. Figure 4 illustrates preliminary results, where the pore-network model captures the reference process very accurately in the porous medium as well as in the free-flow domain. We evaluated the two REV-scale models in a similar manner and quantified the influence of the uncertain modelling parameters in terms of Bayes factors.

Figure 4: Velocity posterior distribution of the pore-network model after validation against the reference pore-scale model at the selected points in the porous medium (top plots) and free flow (bottom plots)

We will continue collaborating with A03-Rybak and A02-Helmig/Weigand concerning the comparison and validation of models for coupling free flow with porous-medium flow. In particular, we plan to conduct an open benchmarking process based on the existing single-phase cases and to define new two-phase cases with the help of the experimental data from A06-Lamanna/Poser and our external partners at Utrecht University.
For statistics about Wikipedia, see Wikipedia:Statistics.

Statistics is a branch of mathematics dealing with data collection, organization, analysis, interpretation and presentation. Descriptive statistics summarize data. Inferential statistics make predictions. Statistics helps in the study of many other fields, such as science, medicine, economics, psychology, politics and marketing. Someone who works in statistics is called a statistician. In addition to being the name of a field of study, the word "statistics" also refers to numbers that are used to describe data or relationships.

History

The first known statistics are census data. The Babylonians did a census around 3500 BC, the Egyptians around 2500 BC, and the Ancient Chinese around 1000 BC. Starting in the 16th century, mathematicians such as Gerolamo Cardano developed probability theory, which made statistics a science. Since then, people have collected and studied statistics on many things. Trees, starfish, stars, rocks, words, almost anything that can be counted has been a subject of statistics.

Collecting data

Before we can describe the world with statistics, we must collect data. The data that we collect in statistics are called measurements. After we collect data, we use one or more numbers to describe each observation or measurement. For example, suppose we want to find out how popular a certain TV show is. We can pick a group of people (called a sample) out of the total population of viewers. Then we ask each viewer in the sample how often they watch the show. The sample is data that you can see, and the population is data that you cannot see (since you did not ask every viewer in the population).
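The sample-versus-population idea can be demonstrated with a short simulation. This is a hypothetical example: the true share of viewers who watch the show is assumed to be known, so we can check how well random samples of different sizes recover it:

```python
import random

random.seed(1)
true_share = 0.30  # fraction of the whole population that watches the show
population = [random.random() < true_share for _ in range(100_000)]

def sample_estimate(n):
    """Estimate the share of viewers from a random sample of n people."""
    sample = random.sample(population, n)
    return sum(sample) / n

# Average error of small samples vs. large samples, over many repeated polls
err_small = sum(abs(sample_estimate(10) - true_share) for _ in range(200)) / 200
err_large = sum(abs(sample_estimate(1000) - true_share) for _ in range(200)) / 200
print(err_small, err_large)  # the larger sample is much closer on average
```

This is exactly the "chance error" discussed later: it shrinks as the sample gets bigger, but it never helps against bias, which comes from choosing the sample badly.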
For another example, if we want to know whether a certain drug can help lower blood pressure, we could give the drug to people for some time and measure their blood pressure before and after.

Descriptive and inferential statistics

Numbers that describe data that you can see are called descriptive statistics. Numbers that make predictions about data that you can't see are called inferential statistics. Descriptive statistics involves using numbers to describe features of data. For example, the average height of women in the United States is a descriptive statistic that describes a feature (average height) of a population (women in the United States). Once the results have been summarized and described, they can be used for prediction. This is called inferential statistics. As an example, the size of an animal depends on many factors. Some of these factors are controlled by the environment, while others are inherited. A biologist might therefore make a model that says there is a high probability that the offspring will be small in size if the parents were small in size. This model probably allows one to predict the size better than by just guessing at random. Testing whether a certain drug can be used to cure a certain condition or disease is usually done by comparing the results of people who are given the drug against those of people who are given a placebo.

Methods

Most often we collect statistical data by doing surveys or experiments. For example, an opinion poll is one kind of survey. We pick a small number of people and ask them questions. Then, we use their answers as the data. The choice of which individuals to take for a survey or data collection is important, as it directly influences the statistics. Once the statistics have been computed, it can no longer be determined which individuals were sampled. Suppose we want to measure the water quality of a big lake.
If we take samples next to the waste drain, we will get different results than if the samples are taken in a faraway, hard-to-reach spot of the lake. There are two kinds of problems which are commonly found when taking samples:

- If there are many samples, the samples will likely be very close to what they are in the real population. If there are very few samples, however, they might be very different from what they are in the real population. This error is called a chance error (see Errors and residuals in statistics).
- The individuals for the samples need to be chosen carefully; usually they will be chosen randomly. If this is not the case, the samples might be very different from what they really are in the total population. This is true even if a great number of samples is taken. This kind of error is called bias.

Errors

We can reduce chance errors by taking a larger sample, and we can avoid some bias by choosing randomly. However, sometimes large random samples are hard to take. And bias can happen if different people are not asked, or refuse to answer our questions, or if they know they are getting a fake treatment. These problems can be hard to fix. See also standard error.

Descriptive statistics

Finding the middle of the data

The middle of the data is called an average. The average tells us about a typical individual in the population. There are three kinds of average that are often used: the mean, the median and the mode. The examples below use this sample data:

Name  | A  B  C  D  E  F  G  H  I  J
Score | 23 26 49 49 57 64 66 78 82 92

Mean

The formula for the mean is

$\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i = \frac{x_1 + x_2 + \cdots + x_N}{N}$

where $x_1, x_2, \ldots, x_N$ are the data and $N$ is the population size (see Sigma Notation).
This means that you add up all the values and then divide by the number of values. In our example,

$\bar{x} = (23 + 26 + 49 + 49 + 57 + 64 + 66 + 78 + 82 + 92)/10 = 58.6$

The problem with the mean is that it does not tell anything about how the values are distributed. Values that are very large or very small change the mean a lot. In statistics, these extreme values might be errors of measurement, but sometimes the population really does contain these values. For example, if in a room there are 10 people who make $10/day and 1 who makes $1,000,000/day, the mean of the data is $90,918/day. Even though it is the average amount, the mean in this case is not the amount any single person makes, and is therefore useless for some purposes. This is the "arithmetic mean"; other kinds of mean are useful for some purposes.

Median

The median is the middle item of the data. To find the median, we sort the data from the smallest number to the largest number and then choose the number in the middle. If there is an even number of data, there will not be a number right in the middle, so we choose the two middle ones and calculate their mean. In our example there are 10 items of data; the two middle ones are "57" and "64", so the median is (57 + 64)/2 = 60.5. As another example, like the income example presented for the mean, consider a room with 10 people who have incomes of $10, $20, $20, $40, $50, $60, $90, $90, $100, and $1,000,000. The median is $55, because $55 is the mean of the two middle numbers, $50 and $60. If the extreme value of $1,000,000 is ignored, the mean is $53. In this case, the median is close to the value obtained when the extreme value is thrown out. The median solves the problem of extreme values as described in the definition of mean above.

Mode

The mode is the most frequent item of data. For example, the most common letter in English is the letter "e".
We would say that "e" is the mode of the distribution of the letters. For example, if in a room there are 11 people with incomes of $10, $20, $20, $40, $50, $60, $90, $90, $90, $100, and $1,000,000, the mode is $90, because $90 occurs three times and all other values occur fewer than three times. There can be more than one mode. For example, if in a room there are 11 people with incomes of $10, $20, $20, $20, $50, $60, $90, $90, $90, $100, and $1,000,000, the modes are $20 and $90. Such data is called bi-modal, meaning it has two modes. Bi-modality is very common and often indicates that the data is the combination of two different groups. For instance, the average height of all adults in the U.S. has a bi-modal distribution, because males and females have separate average heights of 1.763 m (5 ft 9+1⁄2 in) for men and 1.622 m (5 ft 4 in) for women. These peaks are apparent when both groups are combined. The mode is the only form of average that can be used for data that cannot be put in order.

Finding the spread of the data

Another thing we can say about a set of data is how spread out it is. A common way to describe the spread of a set of data is the standard deviation. If the standard deviation of a set of data is small, then most of the data is very close to the average. If the standard deviation is large, though, then a lot of the data is very different from the average. If the data follows the common pattern called the normal distribution (we would say the data is normally distributed), then it is very useful to know the standard deviation: about 68 of every 100 pieces of data will be off the average by less than the standard deviation, about 95 of every 100 measurements will be off the average by less than two times the standard deviation, and about 997 in 1000 will be closer to the average than three standard deviations.
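The averages and the spread described above can be computed directly with Python's standard `statistics` module, using the sample scores from the table:

```python
import statistics

scores = [23, 26, 49, 49, 57, 64, 66, 78, 82, 92]

print(statistics.mean(scores))    # 58.6
print(statistics.median(scores))  # 60.5 (mean of the two middle values, 57 and 64)

# The mode of a small income list: 90 occurs most often
print(statistics.mode([10, 20, 20, 40, 50, 60, 90, 90, 90, 100]))  # 90

# Population standard deviation: how spread out the scores are around the mean
print(round(statistics.pstdev(scores), 1))  # 21.5
```

Note the distinction between `pstdev` (treating the data as the whole population) and `stdev` (treating it as a sample), which divides by N and N-1 respectively.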
Other descriptive statistics

We can also use statistics to find out what percent, percentile, number, or fraction of people or things in a group do something or fit in a certain category. For example, social scientists used statistics to find out that 49% of people in the world are males.

Media related to Statistics at Wikimedia Commons
Retrieved from "https://simple.wikipedia.org/w/index.php?title=Statistics&oldid=6575663"
Tribology Letters, June 2017, 65:59
A Deterministic Stress-Activated Model for Tribo-Film Growth and Wear Simulation
Aydar Akchurin, Rob Bosman
First Online: 31 March 2017

Abstract: A new model was developed for the simulation of growth and wear of tribo-chemical films by combining a boundary element method-based contact model and a stress-activated Arrhenius tribo-film growth equation. Using this methodology, it is possible to predict the evolution and steady-state thickness of the tribo-film (self-limitation) at various operating conditions. The model was validated using two cases for which experimental data were available in the literature. The first case is a single microscopic contact consisting of a DLC-coated AFM tip and an iron-coated substrate. The second case is a macroscale contact between a bearing steel ball and disk. Subsequently, mild wear (wear after running-in) was modeled by assuming diffusion of the substrate atoms into the tribo-film.

Keywords: Tribo-film growth, Stress-activated growth, Self-limitation, Wear simulation

1 Introduction

In tribological systems operating in the boundary or mixed lubrication regime, protection of the surfaces from severe wear is frequently obtained by so-called anti-wear (AW) additives added to the base lubricant. Zinc dialkyldithiophosphate (ZDDP) is one of the most widely used additives [1]. It is well known that the anti-wear mechanism of ZDDP is related to the formation of a protective layer, generally called 'tribo-film'. It is formed through the chemical decomposition of ZDDP, it has a heterogeneous structure, and it is up to 200 nm thick [2]. The ZDDP tribo-film is typically observed on the surfaces of steels, but was also found on the surfaces of other materials, such as aluminum–silicon alloys [3], DLC [4], silicon [5], tungsten carbide [6], and ceramics [7]. The structure of the films formed on various materials was reported to be similar [5, 6, 8].
In the case of steels, it mainly consists of oxygen, phosphate, sulfide, zinc, and iron, with an increase in the concentration of iron closer to the bulk material [1, 9]. The top surface may be covered by an iron-free zinc polyphosphate glass, which has a highly amorphous structure, while the bulk is composed of pyro- or orthophosphate glasses [10]. There is consensus on the fact that the tribo-film is continuously worn and replenished and has a sacrificial function [11, 12]. The exact mechanism of its growth and anti-wear action is, however, still under debate [10]. According to the hard and soft acids and bases (HSAB) theory, hard abrasive iron oxide wear particles react with phosphate glasses to form softer, less abrasive iron sulfides [13], thus preventing severe wear. Due to the continuous generation and subsequent digestion of the oxide particles (or surface oxide layers), the tribo-film is replenished [14]. The theory, however, may not explain the generation of a tribo-film on nonferrous surfaces, such as DLC [4], silicon [5], other metals [6], and ceramics [7], although it was recently hypothesized that HSAB may apply in nonferrous cases as well [15]. Based on molecular dynamics simulations, Mosey et al. [16] concluded that a tribo-film is formed due to pressure-induced cross-linking of ZDDP molecules at pressures higher than 7 GPa. However, such pressures exceed the yield stress of most materials used in common engineering applications, so this mechanism is not realistic in practice. It was also observed that tribo-films can grow at high temperatures (>150 °C) without any contact, suggesting that the chemical reactions also take place due to thermal activation. However, in the case of rubbing, the growth rate is much higher [17]. Initially, the tribo-film growth was described using an Arrhenius equation [17, 18]. However, Bulgarevich et al. [19] showed that this leads to very low (unrealistic) activation energies. In a different paper, Bulgarevich et al.
[20] introduced a stress dependence into the Arrhenius equation by means of a multiplication factor. Very recently, Gosvami et al. [5] employed an Arrhenius equation in which the activation energy was made stress-dependent. The equation was found to fit the growth of the ZDDP tribo-film observed in AFM measurements, while the obtained values for the activation energies were now similar to those typically found in thermo-activated reactions. Zhang and Spikes [6] confirmed the stress-activated Arrhenius relation for ZDDP film growth in a series of macroscale experiments. They also showed that the growth rate is governed by the shear stress rather than the normal stress. This fact is confirmed by experiments showing that tribo-films do not grow in pure rolling contacts, not even at high pressures [21]. There are a number of wear models that include the presence of a tribo-film. Bosman and Schipper [11, 22] developed a mechano-chemical model to calculate wear rates in the mild wear regime. It was assumed that the growth of the tribo-film can be described with a diffusion type of process, while the wear was calculated from the volume of plastic deformation of the tribo-film. Andersson et al. [23] only included the influence of temperature and employed Arrhenius' equation, following So et al. [24]. Ghanbarzadeh et al. [25] proposed a semi-deterministic model for tribo-film growth following Bulgarevich et al. [20]. The influence of the stress was taken into account through a correction factor. One of the characteristics of the tribo-film growth is its self-limitation. The ZDDP molecules react with material on the surface. With the growth of the tribo-film, the concentration of this material decreases, and so does the growth rate of the film. Ultimately, the growth rate will be equal to the wear rate. To model this behavior, a maximum tribo-film thickness was introduced in previously developed models as a fitting parameter to the growth equation.
In the current work, the concept of 'stress-activated growth', recently proposed by Gosvami et al. [5], was adopted. This approach is closer to the physics and chemistry of tribo-film growth and wear, which makes it more reliable to apply the model outside the domain in which the various parameters have been fitted. They assumed that the self-limitation is a result of the evolution of the mechanical properties. When sliding starts, there is no tribo-film, and the contact pressure (and therefore the tangential stress) at the asperity level is determined by the hardness and Young's modulus of the bulk material of the bodies in contact. Since the substrates are typically harder and less compliant than the tribo-film, relatively high pressures are generated. According to the stress-activated growth concept, a high contact pressure decreases the effective activation energy of the reaction and results in a high reaction rate, i.e., a high tribo-film growth rate. With further development of the film, the lower tribo-film hardness and/or Young's modulus start to become relevant and gradually reduce the contact pressure until the growth rate is balanced by the wear rate. By using this concept, it is possible to build a model that predicts the growth of the tribo-film up to its steady-state value. In the current paper, a mechano-chemical model was developed to simulate the tribo-film growth and concurrent wear in the presence of a ZDDP additive. The growth rate of the tribo-film was calculated using a stress-activated Arrhenius equation. A layered elastic-fully plastic model was used to include the influence of the tribo-film on the contact stress and its growth rate. The model was developed based on the experimental data available in the literature, i.e., measurements of tribo-film growth under an AFM tip and in a macroscale contact. The software for the simulation is available online [26].
2 Numerical Model and Tribo-Film Growth

2.1 Layered Elastic-Fully Plastic Contact Models

In the current work, a layered elastic-fully plastic contact model was used for the simulation of the tribo-film growth. The contact pressures can be obtained by solving the following system of equations [27]:

$$\left\{ \begin{array}{l} u(x,y) = z(x,y) - h_s(x,y), \quad \forall x,y \in A_c \\ p(x,y) > 0, \quad \forall x,y \in A_c \\ p(x,y) \le H \\ F_C = \iint p(x,y)\, \mathrm{d}x\, \mathrm{d}y \end{array} \right. \quad (1)$$

where the deflection \(u(x,y)\) can be calculated using the half-space approximation [28]. The model was implemented using the discrete convolution fast Fourier transform technique (DC-FFT) [29], since the deflection \(u(x,y)\) is a discrete convolution:

$$u(x,y) = (K + S) \otimes p, \quad (2)$$

where K and S are the influence matrices for normal and tangential stresses (the tangential stress is \(f_C^b \cdot p\)). For a layered body, expressions for K + S are given in the frequency domain [30, 31]. The model assumes elastic-fully plastic behavior, i.e., a cutoff pressure is applied in the numerical solver.

2.2 Tribo-Film Growth and its Mechanical Model

Following Gosvami et al. [5] and Zhang and Spikes [6], a stress-activated Arrhenius model was employed to calculate the growth of the tribo-film. The growth rate of the tribo-film is given by the following equation:

$$\left( \frac{\partial h}{\partial t} \right)_g = \tilde{\varGamma}_0\, e^{-\frac{\Delta U_{\text{act}} - \tau \cdot \Delta V_{\text{act}}}{k_B T}}, \quad (3)$$

where \(\Delta U_{\text{act}}\) is the internal activation energy (in the absence of stress), \(\Delta V_{\text{act}}\) is the activation volume, \(\tilde{\varGamma}_0\) is a pre-factor, and \(k_B\) and \(T\) are Boltzmann's constant and the absolute temperature.
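Equation (3) can be sketched as a small function. The parameter values below are illustrative placeholders rather than the fitted values from the paper; they are chosen only to show that a higher shear stress lowers the effective barrier and therefore raises the growth rate:

```python
import math

KB = 1.380649e-23  # Boltzmann's constant, J/K

def growth_rate(tau, T, gamma0=1e-2, dU=1.2e-19, dV=4.5e-29):
    """Stress-activated Arrhenius growth rate (m/s), as in Eq. (3).

    tau    : shear stress, Pa
    T      : absolute temperature, K
    gamma0 : pre-factor, m/s
    dU     : internal activation energy, J (illustrative value)
    dV     : activation volume, m^3 (illustrative value)
    """
    return gamma0 * math.exp(-(dU - tau * dV) / (KB * T))

# Shear stress lowers the effective activation energy, so the film grows
# faster under higher stress and at higher temperature.
print(growth_rate(tau=2e8, T=373) < growth_rate(tau=6e8, T=373))  # True
print(growth_rate(tau=6e8, T=353) < growth_rate(tau=6e8, T=373))  # True
```

This stress dependence is what couples the chemistry to the contact mechanics: any change in contact pressure feeds directly into the exponent through tau = mu * p.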
The value of \(\tilde{\varGamma}_0 = 10^{-2}\ \mathrm{m/s}\) was taken from [5], while \(\Delta V_{\text{act}}\) and \(\Delta U_{\text{act}}\) were used to fit experimental data. The shear stress was obtained from \(\tau = \mu p\), where \(\mu\) is the friction coefficient and \(p\) is the contact pressure. The friction coefficient of ZDDP films varies only slightly within a range of 60–100 °C [32], and a constant value of \(\mu = 0.1\) was used in all simulations. According to Eq. (3), and assuming a constant coefficient of friction, the growth rate depends on pressure. The contact pressure is determined by Young's modulus and by the hardness of the tribo-film and substrate. It is reported [33] that, due to the underlying substrate, the hardness of the tribo-film varies with the height of the tribo-film and the plastic penetration depth. Unfortunately, it was not possible to calculate the plastic penetration depth using the available simulation tools. It was therefore chosen to use the empirical variation of the hardness with the tribo-film height from [33], as shown in Fig. 1.

Fig. 1: Hardness evolution with thickness [33]

It should be noted that Young's modulus was found to be largely independent of the temperature [33, 34], i.e., within 25–200 °C (at 220 °C, the ZDDP tribo-film will start to degrade [35]), while the hardness does depend on temperature [33]. Bosman and Schipper [36] proposed a compensation factor for the hardness as a function of temperature. The hardness at a given temperature can then be obtained by multiplying the initial hardness given in Fig. 1 by the compensation factor from Fig. 2.

Fig. 2: Compensation factor [36]

It should be noted that the hardness evolution of a tribo-film as a function of film thickness and temperature is likely to depend on the ZDDP molecule type and the substrate properties. It may then be too simplistic to combine the results of Figs. 1 and 2, since they were obtained in different experiments.
However, for the general simulations considered in the current work, the proposed hardness evolution model is suitable, since it follows the trends of the tribo-film hardness decay with film thickness and temperature. To get a better quantitative description, one would need to evaluate the hardness at various temperatures and thicknesses under the particular experimental conditions and incorporate it into the model. The temperature increase was calculated using the approach of Bosman and De Rooij [37], and it was found to be less than 1 °C for both micro- and macroscale simulations. This is consistent with the findings of Fujita and Spikes [17]. Temperature variations due to frictional heating were therefore neglected in the current simulations.

2.3 Tribo-Film Wear Model

One of the most widely used wear equations is the linear Archard wear equation [38]. This relation does not take into account the presence of a tribo-film, which may be why it fails to fit experimental data in some cases [39, 40, 41]. In the current work, a simple linear relation of the tribo-film wear rate to the film height \(h\) was used:

$$\left( \frac{\partial h}{\partial t} \right)_w = \alpha h, \quad (4)$$

where \(\alpha\) is a fitting parameter and \(h\) is the tribo-film thickness. According to this relation, the wear rate increases with the growth of the tribo-film. This is consistent with the experimental observation of Fujita and Spikes [42] and is ascribed to a difference in wear resistance between the fraction of the film close to the surface (low wear resistance) and the bulk of the film (higher wear resistance). Equation (4) shows the same behavior.

2.4 Evolution of the Tribo-Film and the Flowchart of the Model

The evolution of the tribo-film was calculated from the balance of growth and wear.
The change in the tribo-film thickness was calculated from the following equation: $$\frac{\partial h}{\partial t} = \left( {\frac{\partial h}{\partial t}} \right)_{g} - \left( {\frac{\partial h}{\partial t}} \right)_{w} ,$$ The flowchart of the simulation algorithm is shown in Fig. 3. Inputs are the mechanical properties, temperature, macroscale geometry, and roughness. Initially, the tribo-film does not exist (\(h = 0\)). After the calculation of contact pressures and shear stresses, the local growth of the tribo-film and subsequent wear are obtained from Eqs. (3) and (4), respectively. The tribo-film thickness is then updated, and the average tribo-film thickness is calculated. This average thickness is then used as the thickness of the (uniform) layer in the model. The hardness is recalculated based on the updated tribo-film thickness, and the algorithm continues.

Fig. 3: Flowchart of the tribo-film growth calculation

3 Results and Discussion

3.1 Microscale Contact

The model was tested using the measurements of tribo-film growth under an AFM tip from Gosvami et al. [5]. The contact consists of a DLC-coated (15 nm thick) silicon AFM tip with a radius of 53 nm and an iron-coated (10 nm thick) silicon substrate. First, the tribo-film growth at 600 nN was calculated. By performing an elastic simulation with a layered model, it was found that the coatings are thick enough to neglect the influence of the underlying silicon. Therefore, only the contact between the iron film and the DLC AFM tip was taken into account. The Young's modulus and hardness of the tribo-film were reported to increase with the indentation depth in the ranges of 15–130 GPa and 2–4 GPa, respectively [33]. The lowest values of the Young's modulus and hardness were used in the simulations, as discussed below. It was assumed that the tribo-film grows only on the (iron-coated) substrate, as schematically shown in Fig. 4.
Fig. 4: Schematic diagram of the contact model

A semi-analytical contact model developed by Bosman and Schipper [22] indicated that the indentation depth of the tribo-film under the considered conditions (assuming 2 GPa hardness) would be less than 0.5 nm. Since the plastic penetration depth is this small, the hardness of the substrate will not affect the contact pressure, and the maximum contact pressure will be determined by the hardness of the tribo-film only. For the current simulations, the hardness of the tribo-film was assumed to be constant. On the other hand, the Young's modulus of the substrate does influence the contact pressure, even with thick films. The simulation of the tribo-film growth was therefore performed using the layered elastic-fully plastic contact model with a constant hardness of 2 GPa, see Fig. 4. Initially, wear of the tribo-film was not taken into account and the growth was compared to the data published in Ref. [5]. This was done to test the hypothesis in [5] that the self-limiting behavior is caused by stress relaxation in the tribo-film, see Fig. 5a. It should be mentioned that the initial phase of the tribo-film growth is relatively slow, as discussed in detail in Ref. [5]. This phase of growth is not well understood but can also be regarded as not relevant for engineering problems. In practice, it can be assumed that a tribo-film already exists after running-in, i.e., when mild wear starts. The simulation data could therefore be shifted in time to correspond to the onset of the fast growth phase. As can be seen in Fig. 5a, the calculations correspond well with the experimental results, except for the last part of the curve, where a significant deviation occurs. The mean contact pressure evolution is shown in Fig. 5b. Initially, a high contact pressure develops due to the high elastic modulus of the iron substrate. With the rapid growth of a tribo-film with relatively low Young's modulus and hardness, the pressure drops.
However, the contact pressure remains sufficiently high for steady growth of the tribo-film. It can thus be stated that stress relaxation alone is not sufficient to limit the growth, and wear should be included.

Fig. 5: Contact between AFM tip and steel substrate. a Simulation and experimental data (data reconstructed from [5]) at 600 nN load of the tribo-film growth neglecting tribo-film wear, b corresponding mean pressure as a function of the tribo-film thickness

This was done in the second calculation, shown in Fig. 6a, where the growth measurement data were used to find the values of \(\Delta U_{\text{act}}\) and \(\Delta V_{\text{act}}\). The rapid growth and the steady-state tribo-film thickness are well predicted by the model. However, an important feature, i.e., an overshoot of the tribo-film thickness in the experimental data, cannot be reproduced. This overshoot is frequently observed in macroscale contacts as well [1, 2, 21]. The fact that it occurs on both the micro- and macroscale makes it an intriguing feature of tribo-film growth, which needs to be studied further.

Fig. 6: Same case as in Fig. 4. The simulations now include both tribo-film growth and wear using a Eq. (4), b a time lag, Eq. (6), with \(t_{\text{lag}} = 7\)

The exact reason for the overshoot is not clear. Fujita and Spikes [42] reported an experimental investigation of this phenomenon. They first let the tribo-film grow until it stabilized after an overshoot. At this point, they added fresh ZDDP. They repeated this experiment several times, and every time after the addition of fresh ZDDP, they observed an overshoot of the tribo-film thickness. Interestingly, after each overshoot, the tribo-film stabilized toward the same value. From this, it can be concluded that the overshoot is related to certain processes within fresh ZDDP during rubbing.
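Delaying the wear term relative to growth is one way to reproduce such an overshoot numerically; this anticipates the time-lag wear equation introduced below. A minimal sketch with a constant growth rate and assumed parameter values:

```python
import math
from collections import deque

def simulate_with_lag(steps=40000, dt=1.0, g=1.5e-12, alpha=5e-4, t_lag=2000.0):
    """Tribo-film evolution with delayed wear:
    dh/dt = g - alpha * h(t - t_lag), with a constant growth rate g.
    All parameter values are illustrative only (g in m/s, t_lag in s).
    """
    lag_steps = int(t_lag / dt)
    past = deque([0.0] * lag_steps, maxlen=lag_steps)  # buffer holding h(t - t_lag)
    h, history = 0.0, []
    for _ in range(steps):
        h_delayed = past[0]                 # film thickness one lag time ago
        past.append(h)
        h = max(0.0, h + (g - alpha * h_delayed) * dt)
        history.append(h)
    return history
```

Because wear responds to the older, thinner film, the thickness overshoots the eventual steady-state value g/alpha before relaxing back to it in damped oscillations, qualitatively matching the overshoot-and-settle behavior described above.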
In order to include this behavior in the model, a delay between growth and removal was introduced in the wear equation: $$\left( {\frac{\partial h}{\partial t}} \right)_{w} = \alpha h\left( {t - t_{\text{lag}} } \right),$$ The delay time \(t_{\text{lag}}\) can then be considered a characteristic time for stabilization of the growth rate. The results obtained using Eq. (6) are shown in Fig. 6b. It can be seen that almost the full curve, with the exception of the initial nucleation phase, is described by the model. The parameters used in the simulations are given in Table 2. It should also be noted that, since the overshoot is not always observed and it does not influence the steady-state thickness of the tribo-film, Eq. (4) can be used to calculate wear of the tribo-film. Next, the load was decreased to 340 nN. Figure 7b shows the result of the simulation using exactly the same input parameters as for the higher-load case (see Table 2). The figure also shows the experimental results. The experiment stopped before the point of self-limitation was reached. However, it can be seen that the linear growth of the tribo-film thickness is again well predicted. The simulation was also performed with \(t_{\text{lag}} = 0\) to confirm that the time lag is needed here as well. The results are shown in Fig. 7a. Unfortunately, no more data were available to further test the performance of the model.

Fig. 7: Contact between AFM tip and steel substrate, simulation and experimental data (reconstructed approximately from [5]) at 340 nN load using a Eq. (4), b Eq. (6) with \(t_{\text{lag}} = 3000\;{\text{cycles}}\)

3.2 Macroscale Contact with Roughness

The model was further applied to a macroscale contact consisting of two rough steel surfaces. The physical properties of the steel, the radius of curvature of the contact, and the roughness are listed in Table 1. The applied load was 60 N, the entrainment speed was 0.1 m/s, and the slide-to-roll ratio was 5 %.
The growth of the tribo-film was recorded at 60, 80, and 100 °C. The variation of the effective Young's modulus due to the presence of the tribo-film was found to have a negligible effect on the growth rate. On the other hand, the influence of the hardness was found to be very strong.

Table 1: Properties of the macroscale contact (disk, steel): E (GPa), \(\nu\), H (GPa), R (mm), \(R_{q}\) (nm)

The data from Ghanbarzadeh et al. [25] were used to fit the model parameters \(\Delta U_{\text{act}}\), \(\Delta V_{\text{act}}\), and \(\alpha\). Equation (4) was used for the wear simulation. The simulation and test data are shown in Fig. 8a. A reasonable agreement at various temperatures is obtained. It should be noted that including the stress dependency in the Arrhenius growth equation was a prerequisite for a good fit of all three curves using the same constants. Figure 8a shows a saturation of the tribo-film thickness growth at higher temperatures, an effect that does not occur if only Arrhenius behavior is assumed. The stress-activated equation combined with the described mechanical model made it possible to capture this effect as well.

Fig. 8: a Comparison of simulation and experimental data [25], b evolution of the wear rate with temperature

The set of optimum parameters for the macroscopic steel–steel contact is given in Table 2. The values of \(\Delta U_{\text{act}}\) and \(\Delta V_{\text{act}}\) are close to the ones reported by Gosvami et al. [5], see Table 2. This suggests that the proposed model is applicable over a wide range of contact conditions, supporting its validity.

Table 2: Parameters used in the simulations and found in the literature. Parameter\case: Microscale contact, Gosvami et al.
[5]; Microscale contact (AFM tip against steel); Macroscale contact. Rows: \(\Delta U_{\text{act}}\) (eV); \(\Delta V_{\text{act}}\) (\({\text{\AA}}^{3}\)); \(\alpha\) (1/time unit): \(3 \times 10^{ - 4}\), \(6.9 \times 10^{ - 4}\).

Wear of the tribo-film as a function of time is shown in Fig. 8b. As expected, the wear rate is larger for thicker films. It should be noted that wear of the tribo-film leads to wear of the substrate material as well, in this case iron. This is caused by diffusion of iron atoms into the tribo-film [9]; these atoms are subsequently removed when the tribo-film wears. Wear is measured by the removal of the substrate material (steel) and not by the removal of part of the tribo-film! The loss of substrate material due to tribo-film wear is calculated from the wear volume and the concentration of substrate-material atoms in the tribo-film. The concentration in the tribo-film is highest close to the interface with the substrate material and lower close to the free surface. The loss of substrate material due to tribo-film wear will therefore decrease with increasing film thickness [9, 36]. The following relation was used to calculate the concentration as a function of the tribo-film thickness \(h\): $$C\left( h \right) = e^{{ - C_{1} h}} ,$$ where \(C\left( h \right)\) is the concentration of substrate material and \(C_{1}\) is an unknown constant. The wear of the substrate material can then be calculated from: $$h_{w}^{m} = \mathop \smallint \limits_{0}^{t} C\left( h \right) \cdot \left( {\frac{\partial h}{\partial t}} \right)_{w} {\text{d}}t,$$ where \(h_{w}^{m}\) is the accumulated wear depth (wear of the substrate material). The constant \(C_{1}\) needs to be determined experimentally. In this case, the data from Ghanbarzadeh et al. [25] were used. They measured the wear depth after 45 and 120 min, at 60 and 100 °C.
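Equations (7) and (8) amount to simple bookkeeping over the film-thickness history, which can be sketched as follows (the film history and \(\alpha\) are assumed inputs, and \(C_{1}\) is taken to carry units of 1/m with film thickness in meters):

```python
import math

def substrate_wear(history, dt=1.0, alpha=5e-4, C1=1.24e7):
    """Accumulated substrate wear depth h_w^m, Eq. (8), using the
    concentration profile C(h) = exp(-C1 * h) of Eq. (7).
    history: tribo-film thickness h (in m) at each time step."""
    hw = 0.0
    for h in history:
        film_wear_rate = alpha * h                    # film wear, Eq. (4)
        hw += math.exp(-C1 * h) * film_wear_rate * dt  # substrate fraction removed
    return hw
```

For a given film wear rate, a thicker film dilutes the substrate atoms near the free surface, so a larger \(C_{1}\) or a thicker film lowers the substrate loss, matching the trend described for Fig. 9.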
By taking the difference in wear depth between 120 and 45 min, the running-in period was excluded, see Table 3. Running-in was excluded here since only mild tribo-chemical wear data were needed. As can be seen in Table 3, a remarkable agreement between simulation and experiment was achieved.

Table 3: Wear depth data, with \(C_{1} = 1.24 \times 10^{7}\), derived from the measurements of Ghanbarzadeh et al. [25] and model calculations: T (°C); \(\Delta = h_{w\;120\;\hbox{min} }^{m} - h_{w\;45\;\hbox{min} }^{m}\), measured (nm); \(\Delta = h_{w\;120\;\hbox{min} }^{m} - h_{w\;45\;\hbox{min} }^{m}\), calculated (nm)

The evolution of the substrate material wear is shown in Fig. 9a. Figure 8 shows that the film thickness increases with increasing temperature. Hence, the substrate material concentration decreases with increasing temperature, see Fig. 9b. The substrate material loss (wear) therefore also decreases with increasing temperature, and the highest wear of the substrate material is found at 60 °C.

Fig. 9: a Wear of the substrate material (wear depth), b concentration of the substrate material in the tribo-film

4 Conclusions

A new mechano-chemical model was developed for the calculation of the tribo-film evolution. It was shown that the use of a stress-activated Arrhenius model makes it possible to calculate the tribo-film development up to its steady-state value. In addition, a mild tribo-chemical wear model was introduced based on the growth and wear of tribo-films. The models were validated using data from the literature. The results support the theory of stress- and temperature-driven growth and self-limitation of tribo-films. Inputs for the model are, in addition to the traditional mechanical properties of the substrate materials, the constants in the stress-activated Arrhenius equations and the constants \(\alpha\) and \(C_{1}\) in the wear equations. These constants depend on the type of additive, base oil, and concentration of ZDDP molecules [21, 43].
It is important to note that these constants may be independent of load and temperature. This implies that the number of experiments required to calibrate the model is relatively small, which makes it relatively easy to apply the model to engineering problems. The test data that could be found for validation of the model were limited. It is recommended to perform further tribo-film thickness evolution measurements at various stresses and temperatures at the macroscale to advance the model.

Acknowledgments: This research was carried out under Project Number M21.1.11450 in the framework of the Research Program of the Materials innovation institute M2i.

References

1. Spikes, H.: The history and mechanisms of ZDDP. Tribol. Lett. 17(3), 469–489 (2004). doi: 10.1023/B:TRIL.0000044495.26882.b5
2. Topolovec-Miklozic, K., Forbus, T.R., Spikes, H.A.: Film thickness and roughness of ZDDP antiwear films. Tribol. Lett. 26(2), 161–171 (2007). doi: 10.1007/s11249-006-9189-2
3. Nicholls, M.A., Norton, P.R., Bancroft, G.M., Kasrai, M., Stasio, G.D., Wiese, L.M.: Spatially resolved nanoscale chemical and mechanical characterization of ZDDP antiwear films on aluminum–silicon alloys under cylinder/bore wear conditions. Tribol. Lett. 18(3), 261–278 (2005). doi: 10.1007/s11249-004-2752-9
4. Vengudusamy, B., Green, J.H., Lamb, G.D., Spikes, H.A.: Tribological properties of tribofilms formed from ZDDP in DLC/DLC and DLC/steel contacts. Tribol. Int. 44(2), 165–174 (2011). doi: 10.1016/j.triboint.2010.10.023
5. Gosvami, N.N., Bares, J.A., Mangolini, F., Konicek, A.R., Yablon, D.G., Carpick, R.W.: Mechanisms of antiwear tribofilm growth revealed in situ by single-asperity sliding contacts. Science 348(6230), 102–106 (2015)
6. Zhang, J., Spikes, H.: On the mechanism of ZDDP antiwear film formation. Tribol. Lett. 63(2), 1–15 (2016).
doi: 10.1007/s11249-016-0706-7
7. Mingwu, B., Xushou, Z., Shangkui, Q.: Tribological properties of silicon nitride ceramics coated with molybdenum films under boundary lubrication. Wear 169(2), 181–187 (1993). doi: 10.1016/0043-1648(93)90296-X
8. Vengudusamy, B., Green, J.H., Lamb, G.D., Spikes, H.A.: Durability of ZDDP tribofilms formed in DLC/DLC contacts. Tribol. Lett. 51(3), 469–478 (2013). doi: 10.1007/s11249-013-0185-z
9. Pasaribu, H.R., Lugt, P.M.: The composition of reaction layers on rolling bearings lubricated with gear oils and its correlation with rolling bearing performance. Tribol. Trans. 55(3), 351–356 (2012). doi: 10.1080/10402004.2011.629403
10. Minfray, C., Le Mogne, T., Martin, J.-M., Onodera, T., Nara, S., Takahashi, S., Tsuboi, H., Koyama, M., Endou, A., Takaba, H., Kubo, M., Del Carpio, C.A., Miyamoto, A.: Experimental and molecular dynamics simulations of tribochemical reactions with ZDDP: zinc phosphate-iron oxide reaction. Tribol. Trans. 51(5), 589–601 (2008). doi: 10.1080/10402000802011737
11. Bosman, R., Schipper, D.J.: Running-in of systems protected by additive rich oils. Tribol. Lett. 41, 263–282 (2011)
12. Lin, Y.C., So, H.: Limitations on use of ZDDP as an antiwear additive in boundary lubrication. Tribol. Int. 37(1), 25–33 (2004). doi: 10.1016/S0301-679X(03)00111-7
13. Martin, J.M.: Antiwear mechanisms of zinc dithiophosphate: a chemical hardness approach. Tribol. Lett. 6(1), 1–8 (1999). doi: 10.1023/a:1019191019134
14. Martin, J.M., Onodera, T., Minfray, C., Dassenoy, F., Miyamoto, A.: The origin of anti-wear chemistry of ZDDP. Faraday Discuss. 156, 311–323 (2012).
doi: 10.1039/C2FD00126H
15. Qu, J., Meyer III, H.M., Cai, Z.-B., Ma, C., Luo, H.: Characterization of ZDDP and ionic liquid tribofilms on non-metallic coatings providing insights of tribofilm formation mechanisms. Wear 332–333, 1273–1285 (2015). doi: 10.1016/j.wear.2015.01.076
16. Mosey, N.J., Müser, M.H., Woo, T.K.: Molecular mechanisms for the functionality of lubricant additives. Science 307(5715), 1612–1615 (2005). doi: 10.1126/science.1107895
17. Fujita, H., Spikes, H.A.: The formation of zinc dithiophosphate antiwear films. Proc. Inst. Mech. Eng. Part J: J. Eng. Tribol. 218(4), 265–278 (2004). doi: 10.1243/1350650041762677
18. Hänggi, P., Talkner, P., Borkovec, M.: Reaction-rate theory: fifty years after Kramers. Rev. Mod. Phys. 62(2), 251–341 (1990)
19. Bulgarevich, S.B., Boiko, M.V., Kolesnikov, V.I., Korets, K.E.: Population of transition states of triboactivated chemical processes. J. Frict. Wear 31(4), 288–293 (2010). doi: 10.3103/s1068366610040070
20. Bulgarevich, S.B., Boiko, M.V., Kolesnikov, V.I., Feizova, V.A.: Thermodynamic and kinetic analyses of probable chemical reactions in the tribocontact zone and the effect of heavy pressure on evolution of adsorption processes. J. Frict. Wear 32(4), 301–309 (2011). doi: 10.3103/s1068366611040027
21. Naveira-Suarez, A., Tomala, A., Grahn, M., Zaccheddu, M., Pasaribu, R., Larsson, R.: The influence of base oil polarity and slide–roll ratio on additive-derived reaction layer formation. Proc. Inst. Mech. Eng. Part J: J. Eng. Tribol. 225(7), 565–576 (2011). doi: 10.1177/1350650111405115
22. Bosman, R., Hol, J., Schipper, D.J.: Running in of metallic surfaces in the boundary lubrication regime. Wear 271(7–8), 1134–1146 (2011)
23. Andersson, J., Larsson, R., Almqvist, A., Grahn, M., Minami, I.: Semi-deterministic chemo-mechanical model of boundary lubrication.
Faraday Discuss. 156, 343–360 (2012) (discussion 413–334)
24. So, H., Lin, Y.C.: The theory of antiwear for ZDDP at elevated temperature in boundary lubrication condition. Wear 177(2), 105–115 (1994). doi: 10.1016/0043-1648(94)90236-4
25. Ghanbarzadeh, A., Parsaeian, P., Morina, A., Wilson, M.C.T., van Eijk, M.C.P., Nedelcu, I., Dowson, D., Neville, A.: A semi-deterministic wear model considering the effect of zinc dialkyl dithiophosphate tribofilm. Tribol. Lett. 61(1), 1–15 (2015). doi: 10.1007/s11249-015-0629-8
26. Akchurin, A., Bosman, R.: Tribology Simulator (software). http://www.tribonet.org/cmdownloads/tribology-simulator/
27. Akchurin, A., Bosman, R., Lugt, P.M., van Drogen, M.: On a model for the prediction of the friction coefficient in mixed lubrication based on a load-sharing concept. Tribol. Lett. 59(1), 19–30 (2015)
28. Timoshenko, S.P., Goodier, J.N. (eds.): Theory of Elasticity, 3rd edn. McGraw-Hill, New York (1970)
29. Liu, S.: Thermomechanical contact analysis of rough bodies. Ph.D. thesis, Northwestern University (2001)
30. Wang, Z.-J., Wang, W.-Z., Wang, H., Zhu, D., Hu, Y.-Z.: Partial slip contact analysis on three-dimensional elastic layered half space. J. Tribol. 132(2), 1–12 (2012)
31. Liu, S., Wang, Q.: Studying contact stress fields caused by surface tractions with a discrete convolution and fast Fourier transform algorithm. J. Tribol. 124(1), 36–45 (2002)
32. Roshan, R., Priest, M., Neville, A., Morina, A., Xia, X., Warrens, C.P., Payne, M.J.: Friction modelling in boundary lubrication considering the effect of MoDTC and ZDDP in engine oils. Tribol. Online 6(7), 301–310 (2011). doi: 10.2474/trol.6.301
33. Demmou, K., Bec, S., Loubet, J.L., Martin, J.M.: Temperature effects on mechanical properties of zinc dithiophosphate tribofilms. Tribol. Int.
39, 1558–1563 (2006)
34. Pereira, G., Munoz-Paniagua, D., Lachenwitzer, A., Kasrai, M., Norton, P.R., Capehart, T.W., Perry, T.A., Cheng, Y.-T.: A variable temperature mechanical analysis of ZDDP-derived antiwear films formed on 52100 steel. Wear 262(3–4), 461–470 (2007). doi: 10.1016/j.wear.2006.06.016
35. Tse, J.S., Song, Y., Liu, Z.: Effects of temperature and pressure on ZDDP. Tribol. Lett. 28(1), 45–49 (2007). doi: 10.1007/s11249-007-9246-5
36. Bosman, R., Schipper, D.J.: Mild wear prediction of boundary-lubricated contacts. Tribol. Lett. 42(2), 169–178 (2011). doi: 10.1007/s11249-011-9760-3
37. Bosman, R., de Rooij, M.B.: Transient thermal effects and heat partition in sliding contacts. J. Tribol. 132, 021401 (2010). doi: 10.1115/1.4000693
38. Archard, J.F.: Contact and rubbing of flat surfaces. J. Appl. Phys. 24(8), 981–988 (1953). doi: 10.1063/1.1721448
39. Gotsmann, B., Lantz, M.A.: Atomistic wear in a single asperity sliding contact. Phys. Rev. Lett. 101(12), 125501 (2008)
40. Bhaskaran, H., Gotsmann, B., Sebastian, A., Drechsler, U., Lantz, M.A., Despont, M., Jaroenapibal, P., Carpick, R.W., Chen, Y., Sridharan, K.: Ultralow nanoscale wear through atom-by-atom attrition in silicon-containing diamond-like carbon. Nat. Nanotechnol. 5(3), 181–185 (2010). doi: 10.1038/nnano.2010.3
41. Schirmeisen, A.: Wear: one atom after the other. Nat. Nanotechnol. 8(2), 81–82 (2013)
42. Fujita, H., Spikes, H.A.: Study of zinc dialkyldithiophosphate antiwear film formation and removal processes, part II: kinetic model. Tribol. Trans. 48(4), 567–575 (2005). doi: 10.1080/05698190500385187
43. Fujita, H., Glovnea, R.P., Spikes, H.A.: Study of zinc dialkyldithiophosphate antiwear film formation and removal processes, part I: experimental. Tribol. Trans.
48(4), 558–566 (2005). doi: 10.1080/05698190500385211

Affiliations: 1. Materials innovation Institute (M2i), Delft, The Netherlands; 2. Department of Engineering Technology, Laboratory for Surface Technology and Tribology, University of Twente, Enschede, The Netherlands.

Akchurin, A. & Bosman, R.: Tribol. Lett. (2017) 65: 59. https://doi.org/10.1007/s11249-017-0842-8. Received 18 November 2016; Accepted 20 March 2017; First Online 31 March 2017. Publisher: Springer US.
Entropy of black hole

A line from one of the answers on a different question got me thinking:

The simplest way to see this is probably that a black hole has a much higher entropy than a star or even another type of stellar remnant of even vaguely similar mass, and so there simply could not exist a spontaneous process by which a black hole develops back into a star.

Now, I agree a black hole turning into a star seems far-fetched since it's a one-way trip (like you can't recover a block of sugar from a glass of water to that exact form). But as far as I know, entropy is the amount of disorder. A black hole is denser than a star. For a density that high, I assume a certain amount of order (inverse entropy?) is required. It's an enormous amount of mass in a small amount of space, keeping itself together. Sounds to me like a system, not like a random collection of mass. How can the amount of order necessary for such dense objects as black holes be lower than that of the star they originate from? — Mast

But as far as I know entropy is the amount of disorder.

Entropy is a measure of the number of possible microscopic states consistent with an observed macroscopic state[1], $S = k_\text{B}\ln N$. Fundamentally it has nothing to do with disorder, although as an analogy it sometimes works. For example, in simple situations like an $n$ point-particle gas in a box: there are many more ways to put point-particles in a box in a disorderly manner than an orderly one. However, the exact opposite may be true if the particles have a positive size and the box is crowded enough. Overall, disorder is just a bad analogy.

[1] Even that's not quite true, but it's better than disorder. Specifically, it's a simplification under the assumption that all microstates are equally likely.

A black hole is denser than a star. For a density that high, I assume a certain amount of order (inverse entropy?) is required.
If an object is crushed inside an ideal box that isolates it and prevents any leaks to the outside, the crushed object still carries information about what it was before. And an event horizon is about as ideal a box as there can be. Classically, black holes have no hair, meaning that the spacetime of an isolated black hole is characterized by mass, angular momentum, and electric charge alone. So there are two possible responses to this: either the black hole really has no structure other than those few parameters, in which case the information is destroyed, or it does have structure that's just not externally observable classically. Thus, if information is not destroyed, we should expect the number of microstates of a black hole to be huge, simply because there is a huge number of ways to produce a black hole: roughly, at least the number of microstates of possible collapsing star remnants of the same mass, angular momentum, and charge (though this is idealized, because a realistic collapse process sheds a lot).

For a density that high, I assume a certain amount of order (inverse entropy?) is required.

Quite the opposite; black holes are the most entropic objects for their size. In the early 1970s, physicists noticed interesting analogies between how black holes behave and the laws of thermodynamics. Most relevantly here, the surface gravity $\kappa$ of a black hole is constant (paralleling the zeroth law of thermodynamics) and the area $A$ of a black hole is classically nondecreasing (paralleling the second law). The analogy extends further to the first and third laws of thermodynamics, with $\kappa$ acting like temperature and $A$ as the entropy. The problem is that for this to be more than an analogy, black holes should radiate with a temperature that's (some multiple of) their surface gravity. And they do: this is called Hawking radiation.
So the area can shrink as long as there is a compensating entropy emitted to the outside: $$\delta\left(S_\text{outside} + A\frac{k_\text{B}c^3}{4\hbar G}\right)\geq 0\text{.}$$ Thus, semi-classically, the entropy of a black hole is proportional to its surface area. In natural units, it is simply $S_\text{BH} = A/4$, which is huge because Planck areas are very small. Thus, we know that in a semi-classical approximation, a black hole must radiate with temperature proportional to its surface gravity and have entropy proportional to its area. It's natural to wonder about the next step: if a black hole has all this entropy, where is the structure? How can it have so many possible microstates if it's classically just a vacuum? But going there takes us to the land of quantum gravity, which is not yet firmly established, and is outside the scope of astronomy. — Stan Liou
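To see how huge this is numerically, one can plug a solar-mass Schwarzschild black hole into the Bekenstein-Hawking formula $S = k_\text{B} c^3 A / (4\hbar G)$. A back-of-envelope sketch, with constants rounded:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
hbar = 1.0546e-34  # reduced Planck constant, J s
M_sun = 1.989e30   # solar mass, kg

def bh_entropy_over_kB(M):
    """S/k_B = c^3 A / (4 hbar G) for a Schwarzschild black hole of mass M (kg)."""
    r_s = 2 * G * M / c**2        # Schwarzschild radius
    A = 4 * math.pi * r_s**2      # horizon area
    return c**3 * A / (4 * hbar * G)

S_sun_bh = bh_entropy_over_kB(M_sun)  # on the order of 1e77, dwarfing a star's entropy
```

Since the Schwarzschild radius is proportional to $M$, the entropy scales as $M^2$, so merging black holes always increases the total horizon entropy.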
Highly efficient three-dimensional solar evaporator for high salinity desalination by localized crystallization

Lei Wu, Zhichao Dong, Zheren Cai, Turga Ganapathy, Nicholas X. Fang, Chuxin Li, Cunlong Yu, Yu Zhang & Yanlin Song

Subjects: Nanoscale materials

Solar-driven water evaporation represents an environmentally benign method of water purification/desalination. However, its efficiency is limited by increased salt concentration and salt accumulation. Here, we propose an energy-reutilizing strategy based on a biomimetic 3D structure.
The spontaneously formed water film, with thickness inhomogeneity and a temperature gradient, fully utilizes the input energy through the Marangoni effect and results in localized salt crystallization. A solar-driven water evaporation rate of 2.63 kg m⁻² h⁻¹, with an energy efficiency of >96% under one-sun illumination and under high salinity (25 wt% NaCl), and a water collection rate of 1.72 kg m⁻² h⁻¹ are achieved in purifying natural seawater in a closed system. The crystallized salt stands freely on the 3D evaporator and can be easily removed. Additionally, the energy efficiency and water evaporation are not influenced by salt accumulation, thanks to an expanded water film inside the salt, indicating the potential for sustainable and practical applications.

Facing the globally occurring water scarcity situation, solar-driven water evaporation, or solar steam generation, is considered a promising technology for potential applications in desalination and clean water preparation [1,2,3], as solar energy is the only energy input to purify brine or polluted water. Increasing the energy efficiency from sunlight to the endotherm of water evaporation, decreasing heat loss, and inhibiting pollutant or salt blockage during solar steam generation are critical factors both in fundamental research and in the practical implementation of the solar-driven interfacial evaporation system [4]. Based on these factors, several strategies have been proposed to enhance water evaporation and energy efficiency: developing photothermal materials with effective absorption ability [5,6,7,8,9], minimizing heat loss and enhancing heat localization [10,11,12,13,14], providing an effective water/steam transport interface or increasing the energy output pathway [15,16,17,18,19,20], and increasing durability and antifouling properties in extreme environments [21,22,23,24]. However, the effective utilization of input solar energy and converted heat under high salinity remains a challenge.
Recently, hierarchically nanostructured gels have been shown to reduce the water evaporation enthalpy and achieve a high water evaporation rate under one sun25. Salt blockage can be inhibited by manually introducing salt-rejection pathways, but owing to salt condensation and heat loss, the efficiency is still limited26,27. The increasing salt concentration during continuous evaporation inevitably results in the crystallization or accumulation of salt on the surface of photothermal materials, which decreases effective light absorption, blocks steam generation, and hinders the real-world application of solar steam generation28,29. Progress has been made in recent years: a contactless design can separate the absorber from the water interface, providing long-term contamination resistance30, and edge crystallization can isolate the crystallized salt from the evaporator through gravity31. Nevertheless, improving the evaporation rate and energy efficiency to a practically useful level remains an open problem. It is therefore worth further investigating how to increase the evaporation rate, the energy efficiency, and the sustainability of photothermal materials under high salinity (Supplementary Table 1 and Supplementary Fig. 1a, b, which survey the current state of the solar desalination field as a function of salinity). To achieve effective water evaporation, efficient water transport and vapor escape routes for effective thermal management of the solar-driven evaporation process are critical. For current solar steam generation systems based on a water/structure interface with homogeneous film thickness and temperature, the steam generation rate and the energy efficiency have almost been pushed to their upper limits without varying the water evaporation enthalpy32,33.
Here, we report an efficient energy reutilizing strategy that achieves high energy efficiency and a high water evaporation rate under high salinity. The system manages the inhomogeneity of the interfacial water film through hierarchical water pathways on a biomimetic 3D structure prepared by size-dependent resin-refilling additive manufacturing. Owing to the position-dependent structural inhomogeneity of the 3D structure, the water film generated on the evaporator surface displays a thickness gradient. In addition, the input energy acquired by the biomimetic 3D evaporator depends on the local distance to the light source, which results in position-dependent utilization of the illumination energy. Together with the position-dependent water evaporation induced by the structural inhomogeneity, the liquid film develops a temperature gradient along its length, meaning that the system utilizes the input energy unevenly. The simultaneously formed Marangoni flow supplies water to the more vigorous evaporation sites, enhancing evaporation and energy efficiency, and ultimately enabling energy reutilization inside the water film. A high water evaporation rate in darkness (1.17 kg m−2 h−1), a high solar-driven water evaporation rate (2.63 kg m−2 h−1), and a high energy efficiency (>96%) are achieved under one sun illumination with excellent stability, even at high salinity. In addition, the liquid-film thickness gradient and the position-dependent water evaporation along the film also lead to a salt concentration gradient and localized salt crystallization on the 3D evaporator. Salt crystallizes at the apex liquid film of the structure with a thin liquid layer in between, so the salt stands freely on the evaporator and is easily removed, demonstrating its capability for sustainable utilization.
Furthermore, the batch water purification rate in a closed system reaches 1.72 kg m−2 h−1 when continuously purifying a natural seawater sample, indicating potential for practical applications. Biomimetic design of the 3D evaporator We designed the solar evaporator structure inspired by the super liquid transportation properties of the asymmetric capillary ratchet of the bird beak (Fig. 1a)34 and the pitcher plant peristome surface (Fig. 1b)35,36,37. As shown in Fig. 1c, asymmetric grooves and microcavity arrays with a dimensional gradient along each groove form the 3D structure, with a height-to-diameter (H/D) ratio of 0.7 (Supplementary Fig. 2a). Size-dependent resin-refilling additive manufacturing, based on a self-made digital light processing (DLP) continuous 3D printing system, is employed to fabricate the 3D structures with surface-distributed micropores38 (Fig. 1d, Supplementary Note 1). The formulation of the composite resin is shown in Supplementary Fig. 3a, in which carbon nanotubes (CNTs, Supplementary Fig. 3b) and sodium citrate (Supplementary Fig. 3c) are added to the self-made UV-curable resin. CNTs serve as the photothermal material39 and sodium citrate particles40 act as the producer of surface-distributed pores for the 3D evaporator. The sodium citrate particles cannot flow along with the refilling resin during continuous printing, because the slicing thickness (5 μm) is much smaller than the particle dimension (Fig. 1d inset); instead, the particles are solidified only on the surface of the cured structure. After removing the sodium citrate from the surface, micropores are thus obtained only on the 3D structure surface. Microcomputed tomography (Micro-CT) images in Fig. 1e–g characterize the inner and surface morphology of the biomimetic 3D evaporator, confirming successful preparation as designed.
Scanning electron microscopy (SEM) further shows that the 3D evaporator exhibits randomly distributed micropores only on its surface (Fig. 1h–j). A composite plane film prepared by the same manufacturing process absorbs over 90% of the input light (Supplementary Fig. 4), making the composite suitable for preparing the 3D evaporator. Fig. 1: Additive manufacturing and characterization of the biomimetic 3D solar evaporator. a–c Design of the biomimetic 3D evaporator inspired by the super liquid transportation properties of the asymmetric capillary ratchet of the bird beak and the peristome surface of the pitcher plant. By combining the water transportation induced by the asymmetric capillary ratchet with the directional water transportation induced by the microcavity arrays, the water film that unidirectionally spreads on the biomimetic 3D evaporator displays a thickness gradient, which leads to enhanced solar-driven water evaporation and localized salt crystallization. a The super liquid transportation property of the asymmetric capillary ratchet of the bird beak. b The super liquid transportation property of the peristome surface of the pitcher plant. c The inhomogeneous water film induced localized salt crystallization on the biomimetic 3D evaporator and its application in solar-driven water evaporation enhancement. d Schematic configuration of size-dependent resin-refilling additive manufacturing based on the continuous DLP 3D printing system. The inset shows the size-dependent particle refilling process, in which particles larger than the slicing thickness cannot flow along with the refilling resin and are solidified only on the surface of the printed structure. e–j Characterization of the biomimetic 3D evaporator. e Side-view reconstructed Micro-CT image of the biomimetic 3D evaporator. f Top-angled cross-sectional Micro-CT image of the biomimetic 3D evaporator.
g Side cross-sectional Micro-CT image of the biomimetic 3D evaporator displaying the microcavities with a dimensional gradient. h Top-angled cross-sectional SEM image of the biomimetic 3D evaporator. i Enlarged view of (h). j Side cross-sectional SEM image demonstrating that the micropores are distributed only on the surface of the biomimetic 3D evaporator. Generation and characterization of liquid film A high-speed camera is utilized to monitor the water movement on the biomimetic 3D evaporator surface (Fig. 2a–d). Time-sequence images show an ultra-fast water spreading process (100 ms): the water precursor moves upward along the grooves of the microcavity arrays and simultaneously spreads perpendicularly to each groove, forming a continuous liquid film covering the whole evaporator (Supplementary Movie 1). After this 100-ms spreading process, the water/structure interface is generated. In contrast, without the surface-distributed micropores on the evaporator (i.e., without the introduction of sodium citrate), the time needed to generate the water/structure interface increases to 2 s (Supplementary Fig. 5). Therefore, the asymmetric grooves and gradient microcavity arrays facilitate water suction from the bulk water to the evaporator surface, while the surface-distributed micropores enhance the spreading of the suctioned water across the asymmetric grooves. It should be noted that the upward water velocity on the biomimetic 3D structure, driven by the microcavity-induced continuous and inward liquid transportation, is much faster than that on porous filter paper, which originates from porosity-induced capillary wicking (Supplementary Fig. 6). To demonstrate the effective water coverage on the biomimetic structure, the 3D wetting state of the structure is characterized through Micro-CT, with the bottom of the 3D structure immersed in the liquid bath (Fig. 2e) during the X-ray reconstruction process. As shown in Fig.
2f, the liquid is trapped in the microcavities with a thin layer of liquid on the sidewall of the 3D structure. Owing to the asymmetric grooves and the microcavity dimensional gradient, a 3D water film with thickness inhomogeneity along the sidewall forms spontaneously, in which the apex liquid film (as thin as ~15 μm) is thinner than the bottom liquid film (up to ~1500 μm), as displayed in Fig. 2g, h. Fig. 2: Inhomogeneous water film induced solar-driven water evaporation enhancement. a–d Time sequence of optical captures displaying the ultra-fast upward water spreading process on the biomimetic 3D evaporator surface. Scale bars are 2 mm. e–h 3D reconstruction of the equilibrium state of the biomimetic 3D evaporator in the wet state through Micro-CT from different viewing points. Panels (e), (f), and (g) show the scheme of the Micro-CT configuration, the side-view Micro-CT image, and the cross-sectional Micro-CT image of the reconstructed wet sample, respectively. Ethylene glycol (EG), with its high boiling point, is utilized as the test liquid to reduce the influence of liquid evaporation during the Micro-CT characterization. The red dotted line in (g) represents the contact line between the liquid and the microcavity structure of the biomimetic 3D evaporator. h Micro-CT image of the solely reconstructed hierarchical water pathway with thickness gradient generated on the biomimetic 3D evaporator surface. i Mass change of the water on the biomimetic 3D evaporator under one sun illumination (1 kW m−2), with pure water and the 2D plane as controls. Red, blue, and black lines represent the mass-change curves of the biomimetic 3D structure, the 2D plane, and pure water under one sun illumination, respectively. j Mass change of the water on the biomimetic 3D evaporator in darkness, with pure water and the 2D plane as controls. Red, blue, and black lines represent the mass-change curves of the biomimetic 3D evaporator, the 2D plane, and pure water in darkness, respectively.
Solar steam enhancement We further investigate the solar-driven water evaporation performance by floating the evaporator on the water surface. The evaporation rates are examined by recording the mass change under one sun illumination or in darkness. For comparison, a 2D plane (diameter of 11.0 mm and thickness of 1.0 mm) is prepared from the same composite resin through additive manufacturing. Typical time-dependent mass-change curves for pure water, the 2D plane, and the biomimetic 3D evaporator are measured and plotted with a self-made apparatus (Supplementary Fig. 7, U-shaped tube). As shown in Fig. 2i, under one sun illumination the water evaporation rate increases from ~0.41 kg m−2 h−1 for pure water to ~1.07 kg m−2 h−1 for the 2D plane and ~2.28 kg m−2 h−1 for the biomimetic 3D evaporator. In darkness, the biomimetic 3D evaporator also increases the water evaporation rate: the rates are ~0.11 kg m−2 h−1 for pure water and ~0.84 kg m−2 h−1 for the biomimetic 3D evaporator (Fig. 2j), whereas the 2D plane only slightly increases the rate, to ~0.20 kg m−2 h−1. Furthermore, we also prepare a control 3D evaporator with the same 3D morphology but without the photothermal CNTs and investigate its water evaporation rate in darkness (Supplementary Fig. 8a). The equivalence of the dark evaporation rates of the 3D evaporators with and without photothermal material indicates that the biomimetic 3D morphology itself greatly enhances water evaporation in darkness. To investigate the mechanism of the solar-driven evaporation enhancement of the biomimetic 3D evaporator, an infrared camera is employed to monitor the temperature evolution during water-film formation and the dynamic state of the solar steam generation process.
Figure 3a shows the temperature mapping of the biomimetic 3D evaporator in the dry state without illumination; the temperature profile is homogeneous because there is no energy input. Once illuminated, the whole structure heats up over time (Fig. 3b). A temperature inhomogeneity forms along the dry sidewall of the 3D evaporator, with the apex remaining hotter than the bottom throughout the illumination. Because the light is perpendicular to the projected area of the 3D structure, the energy that the dry structure acquires from the illumination varies along the sidewall owing to the conical morphology. The apex, being closer to the light source, receives a higher energy intensity; together with its larger specific surface area, this leads to a higher apex temperature. In other words, the energy that the structure can absorb is position-dependent, i.e., the input energy is utilized in a position-dependent manner. These analyses agree well with the experimental data measured by the infrared camera (Fig. 3b), further supporting our explanation of the temperature-gradient generation on the dry structure. Therefore, the temperatures at the apex and at the bottom are selected as representatives and monitored over time (Supplementary Fig. 9). Fig. 3: Mechanism of the solar steam enhancement on the biomimetic 3D solar evaporator. a Infrared image of the biomimetic 3D evaporator in the dry state in darkness. b, c Time sequence of infrared captures during the upward water spreading process. d Infrared image of the dynamic equilibrium state of the solar steam generation process. e Temperature profiles along the biomimetic 3D evaporator surface (red line) under one sun illumination, with the 3D columnar structure (orange line), the 2D plane (blue line), and pure water (black line) as controls.
The inset shows the equilibrium infrared image of the 3D columnar structure under one sun illumination. The temperature range is 15–35 °C. f Numerical simulation of the steam diffusion flux at the water/steam interface under one sun illumination. g Temperature profiles along the biomimetic 3D structure surface (red line) in darkness, with the 3D columnar structure (orange line), the 2D plane (blue line), and pure water (black line) as controls. h Numerical simulation of the steam diffusion flux at the water/steam interface in darkness. i Mass change and energy efficiency of water on biomimetic 3D evaporators with different H/D ratios. Orange, black, and blue lines represent the water evaporation rate in darkness, the solar steam generation rate under one sun illumination, and the energy efficiency under one sun illumination, respectively. The error bars in the evaporation rates result from errors in the mass-change measurements. The error bars in the efficiency values result from errors in the measurement of the solar illumination power, the evaporation rate, and the interface temperature. Each error bar represents the deviation from at least five data points. j Temperature differences between the bottom and the apex surface temperatures as a function of H/D ratio. Black and orange lines represent the temperature difference under one sun illumination and in darkness, respectively. Insets are equilibrium infrared images of biomimetic 3D evaporators with different H/D ratios under one sun illumination. The error bars result from errors in the measurement of the surface temperature. Each error bar represents the deviation from at least five data points. When the biomimetic 3D evaporator floats on a water surface, as Fig. 3c, d reveal, the self-climbing water coverage changes the surface temperature distribution.
If evaporation were neglected, the thinner apex liquid film would maintain a higher temperature much more easily than the thicker film at the bottom. Since water evaporation cannot be ignored, however, a thinner liquid film on the hotter apex structure has a higher tendency to evaporate, indicating position-dependent water evaporation along the liquid film. Numerical simulation further confirms this position-dependent evaporation: consistent with the above analysis, the steam diffusion flux at the apex is larger than that at the bottom under one sun illumination, as shown in Fig. 3f. A competition thus exists between the position-dependent energy absorption and transfer on the one hand, and the position-dependent water evaporation on the other. The infrared image reflects the outcome of this competition in the surface temperature profile of the evaporator (red line in Fig. 3e): once the liquid film covers the structure and the equilibrium state is reached, the temperature gradient along the sidewall is inverted. The temperature of the apex liquid film (blue line in Supplementary Fig. 9) is much lower than that of the bottom (black line in Supplementary Fig. 9), indicating that water evaporation dominates at the apex, carrying heat away and decreasing the interfacial temperature there, while energy absorption and transfer dominate at the bottom. It is worth mentioning that the thickest bottom liquid film is heated to ~34.5 °C at equilibrium, indicating sufficient energy transfer from the biomimetic 3D structure to the liquid film, based on the designed groove structure and the contact mode of water with the corresponding grooves, as displayed in Supplementary Fig. 10. The temperature gradient also occurs in darkness, as displayed by the red line of Fig.
3g, consistent with the simulation result in Fig. 3h, indicating more vigorous water evaporation and a lower surface temperature at the apex. Therefore, position-dependent water evaporation occurs both in darkness and under one sun illumination, and is enhanced under illumination, as shown in Fig. 3f–h. In addition, the wet biomimetic 3D evaporators with and without CNTs (Supplementary Fig. 8b) possess almost the same temperature distribution across the entire structure in darkness, further confirming the role of the structure and the consequent water-film inhomogeneity in enhancing water evaporation. Different from the biomimetic 3D evaporators, the temperature profiles of the 2D plane surface (blue lines in Fig. 3e–g) and of pure water (black lines in Fig. 3e–g) are homogeneous along the interface both in darkness and under solar illumination (Supplementary Fig. 11). Therefore, the formation of the temperature gradient on the biomimetic 3D evaporator, under one sun illumination and in darkness, can be attributed to the position-dependent competition between the energy absorption and transfer inside the system and the position-dependent evaporation along the water film. Combined with the water evaporation rates in Fig. 2i, j, this shows that the enhancement by the biomimetic 3D evaporator rests on the generated structural inhomogeneity. With continuous evaporation, the water on the evaporator can be fully evaporated with no water left behind (Supplementary Movie 2). The temperature gradient further induces a surface tension difference inside the liquid film, i.e., the well-known Marangoni effect. This provides a thermocapillary force inside the liquid film, expressed by Eq. (1), which drives liquid transportation, expressed by Eq. (2), from high temperature to low temperature, i.e., from the bottom liquid film to the apex liquid film.
As simulated and analyzed above, the apex possesses a higher steam flux both in darkness and under one sun illumination; water is thus continuously transported to the site with the higher evaporation rate, realizing effective and continuous water evaporation. The unevenly utilized input energy is therefore reutilized in the form of the thermocapillary force, i.e., the energy reutilization property of this system. The thermocapillary force τ originating from the temperature difference can be expressed as41,42 $$\tau = \frac{\Delta \gamma}{L} = \frac{\gamma_{\mathrm{L}} - \gamma_{\mathrm{H}}}{L} = \frac{\mathrm{d}\gamma}{\mathrm{d}T} \cdot \frac{\Delta T}{L}$$ where Δγ is the surface tension difference, γL and γH are the surface tensions of the liquid at low and high temperature, respectively, dγ/dT is the temperature coefficient of the surface tension, ΔT is the temperature difference between the apex liquid film and the bottom liquid film, and L is the distance between the two positions. The liquid transportation inside the liquid film driven by the thermocapillary force can be expressed as43 $$\tau \sim \eta \frac{v_{\mathrm{T}}}{e}$$ where η is the dynamic viscosity of the liquid, vT is the velocity induced by the thermocapillary force, and e is the effective thickness of the liquid film. Combining Eqs. (1) and (2), the liquid transportation flux QT induced by the temperature difference can be expressed as $$Q_{\mathrm{T}} \sim v_{\mathrm{T}} \cdot S \sim \frac{S}{\eta} \cdot \frac{e}{L} \cdot \frac{\mathrm{d}\gamma}{\mathrm{d}T} \cdot \Delta T$$ where S is the effective cross-sectional area along the water-moving direction. Thus, QT is directly proportional to ΔT, indicating that an enhanced water supply to the apex, the site with the higher evaporation rate, is maintained throughout the continuous evaporation process. For the wet biomimetic 3D evaporator under one sun illumination, the temperature difference between the apex liquid film (~21.0 °C, blue line in Supplementary Fig.
9) and the bottom liquid film (~34.5 °C, black line in Supplementary Fig. 9) is about 13.5 °C; the upward water flux QT calculated from Eq. (3) is about 2.67 × 10−4 g s−1, which meets the demand of timely water evaporation (1.81 × 10−4 g s−1). It is worth mentioning that the apex liquid-film temperature is ~21.0 °C under one sun illumination, ~4.0 °C lower than the ambient temperature. Based on previous investigations11,12, energy can also be collected directly from the surrounding environment, which contributes cooperatively with the thermocapillary force to enhance solar-driven water evaporation. Another control structure, a 3D columnar structure, is prepared to prove that the generated thermocapillary force can indeed enhance water evaporation and energy efficiency. Compared with the biomimetic 3D structure, the 3D columnar structure possesses the same projected area, the same height, and the same H/D ratio. The microcavities along each groove possess the same dimension, which results in a liquid film with a homogeneous thickness (Supplementary Fig. 12). In darkness, the surface temperature of the 3D columnar structure is higher than that of the biomimetic 3D structure, yet its water evaporation rate is lower (Supplementary Table 2, orange line in Fig. 3g). Without light, more energy is gathered from the surrounding environment by the biomimetic 3D structure, so the contribution of the thermocapillary force to the dark evaporation enhancement is hard to separate between the two structures. Under one sun illumination, the average surface temperatures of the 3D columnar structure and the biomimetic 3D structure are almost the same, indicating that the energy obtainable from the surrounding environment can be considered the same (Supplementary Table 2). However, the temperature profiles on the surfaces of the two structures are different.
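As a rough numerical sketch of Eqs. (1) and (2), the thermocapillary stress and the induced film velocity can be estimated from standard water properties; the film thickness e and the apex-to-bottom distance L below are assumed illustrative values, not measurements from this work:

```python
# Order-of-magnitude estimate of the Marangoni (thermocapillary) flow
# from Eqs. (1) and (2). Values marked "assumed" are illustrative only.
dgamma_dT = 1.55e-4  # |d(gamma)/dT| of water, N m^-1 K^-1 (literature value)
delta_T = 13.5       # K, apex-to-bottom temperature difference (from the text)
L = 8e-3             # m, apex-to-bottom distance along the sidewall (assumed)
e = 1e-4             # m, representative liquid-film thickness (assumed)
eta = 8.9e-4         # Pa s, dynamic viscosity of water near 25 degC

tau = dgamma_dT * delta_T / L   # Eq. (1): thermocapillary stress, Pa
v_T = tau * e / eta             # Eq. (2): induced film velocity, m s^-1

print(f"tau ~ {tau:.2f} Pa, v_T ~ {v_T * 100:.1f} cm/s")
# -> tau ~ 0.26 Pa, v_T ~ 2.9 cm/s
```

A centimeter-per-second-scale film velocity is ample to replenish the evaporating apex film, consistent with the flux comparison in the text; the exact numbers depend on the assumed geometry.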
On the 3D columnar structure, the top surface temperature is higher than the bottom (orange line, Fig. 3e), a trend opposite to that of the biomimetic 3D structure, which leads to an oppositely directed thermocapillary force. The thermocapillary-driven water flow on the 3D columnar structure is thus directed from top to bottom, hindering water supplementation from the source to the evaporation surface during continuous evaporation. Without sufficient water supplementation, less water is available for solar-driven evaporation, which ultimately reduces the water evaporation rate compared with the biomimetic 3D structure. In our system, the thermocapillary force carries water from the bottom to the apex, the more vigorous evaporation site, to realize effective water evaporation and enhance energy efficiency. The above conclusion holds under the condition that the water supplementation rate matches the water evaporation rate, maintaining a continuous liquid film for evaporation. If the water supplementation rate cannot match the evaporation rate, the temperature gradient is first inverted, as shown in Supplementary Fig. 13a, b for the smooth conical structure, where the temperature of the apex liquid film is higher than that of the bottom liquid film. The apex liquid film is then completely evaporated, which finally leads to dewetting of the liquid film on the smooth conical structure and a decrease of the effective water/structure contact surface. The water evaporation rate on such a structure is much lower than that on the biomimetic 3D structure (Supplementary Fig. 13c). Therefore, the effective water supplementation on the biomimetic 3D structure also contributes to the realization of effective water evaporation. We further investigate the influence of the 3D evaporator morphology on the enhancement of solar-driven water evaporation.
Three additional evaporators with different H/D ratios (one shorter and two taller than that used above, with H/D ratios of 0.3, 1.4, and 2.0, respectively) are prepared by regulating the number of microcavities along each asymmetric groove without changing the initial dimension of the microcavity, as shown in Supplementary Fig. 2b, c. As indicated by the orange line of Fig. 3i, as the H/D ratio increases from 0.3 to 0.7, 1.4, and 2.0, the solar-driven water evaporation rate increases from ~1.85 to ~2.28, ~2.54, and ~2.63 kg m−2 h−1, respectively. As displayed in Supplementary Fig. 14a, the bottom temperature changes little with increasing H/D ratio, while the apex liquid-film temperature decreases drastically. The temperature difference therefore increases (Fig. 3j), producing an increasingly large thermocapillary force inside the liquid film as the H/D ratio increases. In addition, the energy that can be acquired from the surrounding environment increases owing to the larger temperature difference between the apex liquid film and the surroundings, which further contributes to the increasing trend of the solar-driven water evaporation rate. The same tendency is observed in darkness, as shown in Supplementary Fig. 14b and by the black lines in Fig. 3i, j, which further explains the increased dark water evaporation rate of the 3D evaporator displayed in Fig. 2j. If the water evaporation rate in darkness is subtracted from the water evaporation rate under one sun illumination, the net evaporation rates are ~1.43, ~1.44, ~1.47, and ~1.46 kg m−2 h−1 for the evaporators with H/D ratios of 0.3, 0.7, 1.4, and 2.0, respectively. The net evaporation rates of the biomimetic 3D evaporators remain essentially constant, approaching the theoretical water evaporation rate.
The energy efficiency (η) of the biomimetic 3D evaporator is calculated as the percentage of the incident sunlight energy that is utilized by net water evaporation, to evaluate the photothermal performance. It is generally calculated via the following formula11,44: $$\eta = m\left( L_{\mathrm{v}} + Q \right)/P_{\mathrm{in}}$$ where m is the net water mass-change rate (kg m−2 h−1), Lv is the latent heat of water evaporation (J kg−1)45, Q is the sensible heat of water (J kg−1), and Pin is the power of the incident simulated sunlight (J m−2 h−1). Owing to the temperature gradient on the 3D evaporator, the energy efficiencies for the different H/D ratios are calculated from the average water/structure temperatures obtained from Supplementary Fig. 14a. The average interfacial temperature decreases from ~30.9 °C to ~28.9, ~22.7, and ~21.0 °C as the H/D ratio increases from 0.3 to 0.7, 1.4, and 2.0, respectively. The energy efficiencies calculated from the net evaporation rates are ~96.9, ~97.5, ~98.9, and ~98.3%, correspondingly (blue line in Fig. 3i). In the control experiment, the 2D plane has an average interfacial temperature of ~33.1 °C and a net evaporation rate of ~0.87 kg m−2 h−1, giving an energy efficiency of only ~59.2%. Compared with the other structures, the biomimetic 3D evaporator has the highest efficiency, which can be attributed to the reutilization of solar energy through the formation of the temperature gradient, which allows additional energy capture from the ambient environment. High-salinity brine, a 25.0 wt% sodium chloride (NaCl) solution, is prepared as a representative sample to demonstrate the solar desalination capability and durability of the biomimetic 3D evaporator. The desalination process is recorded by a camera.
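Equation (4) above can be sanity-checked with the reported H/D = 0.7 numbers; the latent heat, specific heat, and ambient temperature below are standard or assumed values, not taken from this work:

```python
# Sanity check of Eq. (4) for the H/D = 0.7 evaporator.
m = 1.44 / 3600     # net evaporation rate, kg m^-2 s^-1 (from the text)
T_interface = 28.9  # degC, average interfacial temperature (from the text)
T_ambient = 25.0    # degC (the text notes the apex sits ~4 degC below ambient)
L_v = 2.43e6        # J kg^-1, latent heat of water near 29 degC (literature)
c_p = 4186.0        # J kg^-1 K^-1, specific heat of water (literature)
P_in = 1000.0       # W m^-2, one-sun illumination

Q = c_p * (T_interface - T_ambient)  # sensible heat, J kg^-1
eta = m * (L_v + Q) / P_in
print(f"eta ~ {eta:.1%}")  # -> eta ~ 97.9%
```

This lands within about half a percentage point of the reported ~97.5%; the small gap is attributable to the assumed latent-heat and ambient-temperature values.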
With continuous solar illumination and water evaporation, NaCl crystallizes on the evaporator as the concentration approaches the critical concentration of the saturated solution (~26.4 wt%, 25 °C). The salt crystallization and accumulation are spatially localized at the apex of the 3D evaporator (Fig. 4a–d). Because position-dependent water evaporation occurs along the liquid film (Fig. 3f–h), together with the liquid-film thickness gradient, a salinity gradient is further generated along the sidewall of the biomimetic 3D evaporator: the salt concentration of the apex liquid film is higher than that of the bottom liquid film. The apex liquid film therefore reaches the critical crystallization concentration more easily than the bottom during continuous water evaporation, i.e., the higher the position on the biomimetic 3D evaporator, the easier the NaCl crystallization. At such high salinity, NaCl also crystallizes on the sidewall of the 3D evaporator surface; however, the NaCl crystallized on the sidewall is carried by the spontaneously formed water flow from the bulk water and moves upward to the apex of the biomimetic 3D evaporator (Supplementary Fig. 15, Supplementary Movie 3). Significantly, the salt stands freely on the 3D evaporator and can be easily removed by tilting the evaporator (Fig. 4e, Supplementary Movie 4). Characterizing the detached salt through Micro-CT reveals a 3D salt/evaporator contact surface with a conical-hole morphology inside the salt (Fig. 4f). As shown in Fig. 4g, h, the dimension of the conical hole is larger than the apex of the biomimetic 3D evaporator, which means that the crystallized salt does not directly contact the 3D evaporator; a liquid layer lies in between (Fig. 4i). Even with salt crystallization at the apex, the solar steam generation rate does not change appreciably (~2.24 kg m−2 h−1 under one sun illumination and ~0.81 kg m−2 h−1 in darkness).
Micro-CT images show that channels exist inside the salt (Fig. 4g and Supplementary Movie 5). The continuous water-film pathway is thus extended into the crystallized salt at the apex (Fig. 4i). In addition, the temperature profile (Fig. 4j) during the solar desalination process shows the same tendency and values as that measured with pure water (red line, Fig. 3e). With an average interface temperature of ~30.1 °C and a net evaporation rate of ~1.43 kg m−2 h−1, the efficiency under high salinity calculated from Eq. (4) is ~97.1%. Moreover, the localized crystallization phenomenon is universal for evaporators with different H/D ratios (Supplementary Fig. 16). Fig. 4: Localized salt crystallization mechanism on the biomimetic 3D evaporator. a–d Time sequence of optical captures displaying the localized crystallization process on the biomimetic 3D evaporator. e Optical image showing the easy salt removal of the biomimetic 3D evaporator; the crystallized salt can be removed by tilting the evaporator. f Bottom-view Micro-CT image of the removed NaCl crystal. g Side cross-sectional Micro-CT image of the detached crystal, magnifying the conical hole morphology of the contacting interface. h Micro-CT image of the apex position of the biomimetic 3D evaporator. i Scheme of the continuous water film along the sidewall of the biomimetic 3D evaporator, which extends into the localized crystal at the apex position. j Temperature profile along the sidewall of the biomimetic 3D evaporator with the localized crystal present at the apex position. Inset: the corresponding infrared image. k Measured concentrations of Na+ in different brine samples (25, 20, 10, 1, and 0.1 wt% NaCl solutions) before and after desalination. Orange and green columns represent the ion concentrations before and after purification, respectively. The broken line indicates the WHO Na+ concentration standard for drinkable water.
l–n Optical images of water samples containing different heavy metal ions (the aqueous solutions of 10 wt% CoCl2, 20 wt% CuSO4, and a mixed solution composed of 10 wt% NaCl and 10 wt% CuSO4) before and after solar purification. o Measured concentrations of the corresponding metal ions before and after purification. Orange and green columns represent the metal ion concentrations before and after purification, respectively. As shown in Fig. 4k, after desalination, the Na+ concentration characterized by inductively coupled plasma mass spectrometry (ICP-MS) is decreased by four orders of magnitude to 3.6 mg L−1, approximately two orders of magnitude below the drinking-water standard defined by the World Health Organization (WHO)46. Furthermore, the method is equally effective for brine samples with lower NaCl concentrations of 20.0, 10.0, 1.0, and 0.1 wt% (Fig. 4k). Moreover, the biomimetic 3D evaporator can also purify water with high concentrations of heavy metals. As shown in Fig. 4l–n, the colors of the 10 wt% cobalt dichloride (CoCl2) and 20 wt% copper sulfate (CuSO4·5H2O) solutions change from red and blue, respectively, to clear after solar-driven water evaporation, corresponding to the reduction of Co2+ and Cu2+ from 45999.2 mg L−1 and 53002.5 mg L−1 to 3.9 mg L−1 and 2.3 mg L−1 (Fig. 4o), respectively. For a mixed solution composed of 10 wt% NaCl and 10 wt% CuSO4·5H2O, the concentrations of Na+ and Cu2+ can be decreased simultaneously, from 39447.1 mg L−1 and 27787.6 mg L−1 to 3.5 mg L−1 and 3.7 mg L−1, respectively, demonstrating its potential for purifying complex solutions. A batch purification apparatus composed of an inlet tube, a condenser, a brine-sample container, and an outlet tube was set up to simulate a practical water-purification process (Fig. 5a, Supplementary Fig. 17)47. An evaporator composed of a 3D structure array with an H/D ratio of 1.4 was prepared, where the intervals between adjacent 3D structures are filled with flat planes (Fig. 5b).
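The "four orders of magnitude" Na+ reduction reported above can be sanity-checked with simple arithmetic. The brine density assumed below (and hence the estimated feed Na+ concentration) is an illustrative value, not a figure from the study:

```python
import math

# Back-of-envelope check of the reported Na+ reduction after desalination.
# The 25 wt% brine density is an assumed illustrative value.

M_NA, M_NACL = 22.99, 58.44          # molar masses (g mol^-1)

def na_mg_per_litre(wt_frac, density_g_per_ml):
    """Approximate Na+ concentration (mg L^-1) of a NaCl solution."""
    nacl_g_per_l = wt_frac * density_g_per_ml * 1000.0
    return nacl_g_per_l * (M_NA / M_NACL) * 1000.0

feed = na_mg_per_litre(0.25, 1.19)   # 25 wt% brine, density ~1.19 g mL^-1
permeate = 3.6                       # reported Na+ after desalination (mg L^-1)
orders = math.log10(feed / permeate)
print(round(orders, 1))              # -> 4.5 (between four and five orders)
```

The estimate lands between four and five orders of magnitude, consistent with the text's figure.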
A natural seawater sample from Jiaozhou Bay, the Yellow Sea, is used as the desalination sample. As shown in Fig. 5c–e and Supplementary Movie 6, the generated vapor continuously condenses on the top and side inner surfaces of the condenser, then flows to the bottom surface, and is finally collected by the outlet tube. The gradually depleted seawater can be replenished through the inlet tube to the container, and the purified water can be collected through the outlet tube, endowing the apparatus with a continuous water-purification capability. The water collection rate (~1.72 kg m−2 h−1) is lower than that of the open system (~2.54 kg m−2 h−1), which can be attributed to the increased humidity in the closed system and to water remaining on the inner wall of the condenser. In addition, the concentrations of all four primary ions (Na+, Mg2+, K+, and Ca2+) initially present in the seawater sample are significantly reduced (Fig. 5f), indicating effective purification of natural seawater. The solar endurance test in Fig. 5g shows a stable evaporation rate for seawater over 10 days under one sun illumination for 9 h per day. The salt accumulated at the apex positions of the 3D evaporators can be easily removed and collected (Fig. 5h–j), indicating that the biomimetic 3D evaporator is reliable for long-term solar desalination. Fig. 5: Solar desalination and durability of the biomimetic 3D evaporator. a Scheme of the batch purification prototype that simulates a practical solar water-purification apparatus. The brine sample is introduced into the container through the inlet. Water evaporates on the 3D evaporator surface under illumination, then condenses on the upper surface and sidewall of the condenser, and is finally collected by the outlet. b Optical image of the large-area evaporator composed of arrays of biomimetic 3D structures with an H/D ratio of 1.4. c Optical image of the batch purification prototype without solar illumination.
d Optical image of the batch purification prototype under solar illumination, where most light is collected by the evaporator. e Optical capture of the condensation of the generated solar steam on the inner wall of the condenser, which is finally collected by the outlet tube. The red dashed line marks the upper surface of the purified water before being drawn off through the outlet. f Measured concentrations of the four primary ions in the natural seawater sample from Jiaozhou Bay, the Yellow Sea, before and after desalination. Orange and green columns represent the ion concentrations before and after purification, respectively. g Solar endurance test results for the arrayed biomimetic 3D evaporator exposed under one sun illumination in the closed-system batch purification prototype for 10 days, 9 h per day. h, i Optical images of the arrayed evaporators before (h) and after (i) removing the locally crystallized salt. j Optical image of the collected salt detached from the arrayed evaporator after the solar endurance test. In summary, inspired by the unique water-transport properties of the asymmetric capillary ratchet of the bird beak and the pitcher-plant peristome surface, we have constructed a biomimetic 3D evaporator for high-efficiency solar-driven water evaporation and desalination. Based on the developed resin system, a size-dependent resin-refilling phenomenon occurs during the continuous additive-manufacturing process. Surface-distributed micropores are formed on the prepared surface, endowing the biomimetic 3D evaporator with an ultra-fast water-spreading property.
Due to the designed morphology of the 3D structure, with asymmetric grooves and gradient microcavity arrays, the liquid film spreading on the structure surface displays a position-related thickness and a temperature gradient along the sidewall, which further generates a thermocapillary force inside the liquid film and the capability to capture energy from the surrounding environment, enhancing water evaporation and energy efficiency. A high water evaporation rate of 1.17 kg m−2 h−1, a high solar-driven water evaporation rate of 2.63 kg m−2 h−1, and an energy efficiency greater than 96% can be achieved under one sun illumination, with excellent stability even under high salinity (25 wt% NaCl solution). The locally crystallized salt stands freely at the apex without contaminating the evaporator and can be easily removed, owing to the extension of the inhomogeneous water film inside the crystal. In addition, energy efficiency and water evaporation are not influenced by salt accumulation, indicating its potential for sustainable and practical applications in the future. Preparation of the nanocomposite The UV-curable resin system was formulated by mixing a prepolymer, a reactive diluent, a photoinitiator, and other additives, all tailored to be active at the relevant UV wavelength. Here, a polyacrylate system composed of a prepolymer acrylic resin, the monomer di(ethylene glycol) ethyl ether acrylate, the photoinitiator phenylbis(2,4,6-trimethylbenzoyl)phosphine oxide, and the crosslinker poly(ethylene glycol) diacrylate 700 was employed. Carbon nanotubes (CNTs) with a mean diameter of 100 nm and a length range of 20–200 μm were purchased from Sigma-Aldrich. Sodium citrate of analytical grade was bought from Beijing Chemical Works (China). The individual components of the UV-curable resin were mixed before the dispersion process. Then the carbon nanotube powder and the sodium citrate powder were mixed with the UV-curable resin separately to form two mixtures.
Then, the two mixtures were further mixed to form a slurry. The weight percentages of CNTs and sodium citrate in the slurry were 0.5 wt% and 20 wt%, respectively. The slurry was transferred to an ultrasonic cell pulverizer to achieve a stabilized dispersion. Finally, after vacuum degassing, the composite resin was obtained. Post-3D printing treatment After 3D printing, the printed parts were developed in ethanol for 2 min with ultrasonic treatment to remove the uncured resin, then developed in a 1:1 vol/vol solution of ethanol and water to remove the sodium citrate solidified on the surface. To enhance the mechanical properties, a post-curing process was performed in a tank with 20 multidirectional LEDs emitting 405-nm light for 30 min at room temperature. Before use, the front side of the 3D-printed structures was plasma treated to endow the upper surface with the hydrophilicity needed for water spreading. After plasma treatment, surface hydrophilicity increases owing to the increased amount of surface –OH groups. This is confirmed by the X-ray photoelectron spectroscopy (XPS) results displayed in Supplementary Fig. 18, where the signal of oxygen-containing groups rises. Originating from bond scission and the incorporation of oxygen onto the cured resin surface, surface hydrophilicity is improved after plasma treatment. The detailed characterization strategies are given in Supplementary Note 2. Numerical simulation Numerical simulation was performed using the commercial finite element software package COMSOL Multiphysics 5.4. The water evaporation process of the whole system is simulated by solving the following equations: $$- D\nabla ^2c_{\mathrm{v}} = 0$$ $$c_{\mathrm{v}} = \phi c_{{\mathrm{sat}}}$$ $$g = - D\nabla c_{\mathrm{v}}$$ where cv is the concentration of vapor in air, D is the diffusion coefficient, csat is the saturated vapor concentration, ϕ is the relative humidity, and g is the steam diffusion flux.
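The vapor-diffusion part of this model (the Laplace equation for cv and the flux g above) can be illustrated with a minimal finite-difference relaxation. The grid size, spacing, diffusivity, and boundary values here are illustrative assumptions and not the COMSOL configuration used in the study:

```python
import numpy as np

# Minimal 2D sketch of the vapor-diffusion model: solve laplacian(c_v) = 0
# by Jacobi relaxation, with the water surface held at saturation and the
# far field at ambient humidity, then take g = -D * grad(c_v) at the surface.
# All numerical values are assumed, for illustration only.

D = 2.5e-5                     # vapor diffusivity in air (m^2 s^-1, assumed)
c_sat, phi = 1.0, 0.5          # normalized saturation conc., relative humidity

n = 50
c = np.full((n, n), phi * c_sat)
c[0, :] = c_sat                # boundary condition: saturated water surface
c[-1, :] = phi * c_sat         # boundary condition: ambient far field

for _ in range(5000):          # relax the interior toward the steady state
    c[1:-1, 1:-1] = 0.25 * (c[2:, 1:-1] + c[:-2, 1:-1] +
                            c[1:-1, 2:] + c[1:-1, :-2])

h = 1e-3                       # grid spacing (m, assumed)
g_surface = -D * (c[1, :] - c[0, :]) / h   # evaporative flux at the surface
print(bool(g_surface.mean() > 0))          # vapor diffuses away from the water
```

In the actual model the saturation concentration is temperature-dependent, which couples this diffusion problem to the heat-transfer equations described next; that coupling is omitted in this sketch.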
The vapor concentration at the boundaries of the water surface is set to the saturated vapor concentration, csat. The relative humidity of the environment is set to 0.5. The heat transfer is simulated by solving Eq. (8) and Eq. (9): $$- k\nabla ^2T = Q$$ where T is the temperature, k is the heat conductivity coefficient, and Q is a heat source. The heat source at the boundaries of the water surfaces is set to Qevap, as shown below: $$Q_{{\mathrm{evap}}} = - L_{\mathrm{v}}g_{{\mathrm{evap}}}$$ where gevap is the diffusion flux at the water surface and Lv is the latent-heat coefficient. Another heat source is set at the boundaries of the solid surfaces to represent the heat from illumination. The environment temperature is set to 293.15 K. csat increases with temperature T, which is taken into account in the simulation. All the parameters D, k, csat, and Lv are taken from the built-in material library of COMSOL Multiphysics 5.4. Because of the symmetry of the structure, one groove of the wet biomimetic structure is simulated, in darkness and under one sun illumination. The detailed geometry of the model used for numerical simulation and the detailed boundary conditions are shown in Supplementary Figs. 19, 20 and Supplementary Table 3. The authors declare that the main data supporting the findings of this study are contained within the paper. All other relevant data are available from the corresponding author upon reasonable request. Oki, T. & Kanae, S. Global hydrological cycles and world water resources. Science 313, 1068–1072 (2006). Ni, G. et al. Steam generation under one sun enabled by a floating structure with thermal concentration. Nat. Energy 1, 16126 (2016). Shannon, M. A. et al. Science and technology for water purification in the coming decades. Nature 452, 301–310 (2008). Lewis, N. S. Research opportunities to advance solar energy utilization. Science 351, aad1920 (2016). Ghasemi, H. et al. Solar steam generation by heat localization. Nat. Commun.
5, 4449 (2014). Liu, Y. M. et al. A bioinspired, reusable, paper-based system for high-performance large-scale evaporation. Adv. Mater. 27, 2768–2774 (2015). Zhou, L. et al. Self-assembly of highly efficient, broadband plasmonic absorbers for solar steam generation. Sci. Adv. 2, e1501227 (2016). Cui, L. F. et al. High rate production of clean water based on the combined photo-electro-thermal effect of graphene architecture. Adv. Mater. 30, 1706805 (2018). Mandal, J. et al. Scalable, "dip-and-dry" fabrication of a wide-angle plasmonic selective absorber for high-efficiency solar-thermal energy conversion. Adv. Mater. 29, 1702156 (2017). Wicklein, B. et al. Thermally insulating and fire-retardant lightweight anisotropic foams based on nanocellulose and graphene oxide. Nat. Nanotechnol. 10, 277–283 (2015). Shi, Y. S. et al. A 3D photothermal structure toward improved energy efficiency in solar steam generation. Joule 2, 1171–1186 (2018). Li, X. et al. Enhancement of interfacial solar vapor generation by environmental energy. Joule 2, 1331–1338 (2018). Zhu, L. L., Gao, M. M., Peh, C. K. N., Wang, X. Q. & Ho, G. W. Self-contained monolithic carbon sponges for solar-driven interfacial water evaporation distillation and electricity generation. Adv. Energy Mater. 8, 1702149 (2018). Ito, Y. et al. Multifunctional porous graphene for high-efficiency steam generation by heat localization. Adv. Mater. 27, 4302–4307 (2015). Zhu, M. W. et al. Tree-inspired design for high-efficiency water extraction. Adv. Mater. 29, 1704107 (2017). Li, X. Q. et al. Graphene oxide-based efficient and scalable solar desalination under one sun with a confined 2D water path. Proc. Natl Acad. Sci. USA. 113, 13953–13958 (2016). Jiang, Q. S. et al. Bilayered biofoam for highly efficient solar steam generation. Adv. Mater. 28, 9400–9407 (2016). Yang, P. H. et al. Solar-driven simultaneous steam production and electricity generation from salinity. Energy Environ. Sci. 10, 1923–1927 (2017). Xue, G. B. 
et al. Water-evaporation-induced electricity with nanostructured carbon materials. Nat. Nanotechnol. 12, 317–321 (2017). Wang, W. B. et al. Simultaneous production of fresh water and electricity via multistage solar photovoltaic membrane distillation. Nat. Commun. 10, 3012 (2019). Zhang, L. B., Tang, B., Wu, J. B., Li, R. Y. & Wang, P. Hydrophobic light-to-heat conversion membranes with self-healing ability for interfacial solar heating. Adv. Mater. 27, 4889–4894 (2015). Li, W., Li, Z., Bertelsmann, K. & Fan, D. E. Portable low-pressure solar steaming-collection unisystem with polypyrrole origamis. Adv. Mater. 31, 1900720 (2019). Zhou, X. Y., Zhao, F., Guo, Y. H., Zhang, Y. & Yu, G. H. A hydrogel-based antifouling solar evaporator for highly efficient water desalination. Energy Environ. Sci. 11, 1985–1992 (2018). Xu, N. et al. A water lily–inspired hierarchical design for stable and efficient solar evaporation of high-salinity brine. Sci. Adv. 5, eaaw7013 (2019). Zhao, F. et al. Highly efficient solar vapour generation via hierarchically nanostructured gels. Nat. Nanotechnol. 13, 489–495 (2018). Kuang, Y. et al. A high‐performance self‐regenerating solar evaporator for continuous water desalination. Adv. Mater. 31, 1900498 (2019). Ni, G. et al. A salt-rejecting floating solar still for low-cost desalination. Energy Environ. Sci. 11, 1510–1519 (2018). Shi, Y. et al. Solar evaporator with controlled salt precipitation for zero liquid discharge desalination. Environ. Sci. Technol. 52, 11822–11830 (2018). Ren, H. Y. et al. Hierarchical graphene foam for efficient omnidirectional solar-thermal energy conversion. Adv. Mater. 29, 1702590 (2017). Cooper, T. A. et al. Contactless steam generation and superheating under one sun illumination. Nat. Commun. 9, 5086 (2018). Xia, Y. et al.
Spatially isolating salt crystallisation from water evaporation for continuous solar steam generation and salt harvesting. Energy Environ. Sci. 12, 1840–1847 (2019). Gao, M., Zhu, L., Peh, C. K. N. & Ho, G. W. Solar absorber material and system designs for photothermal water vaporization towards clean water and energy production. Energy Environ. Sci. 12, 841–864 (2019). Tao, P. et al. Solar-driven interfacial evaporation. Nat. Energy 3, 1031–1041 (2018). Prakash, M., Quere, D. & Bush, J. W. M. Surface tension transport of prey by feeding shorebirds: The capillary ratchet. Science 320, 931–934 (2008). Chen, H. W. et al. Continuous directional water transport on the peristome surface of Nepenthes alata. Nature 532, 85–89 (2016). Li, C. X. et al. Uni-directional transportation on peristome-mimetic surfaces for completely wetting liquids. Angew. Chem. Int. Ed. 55, 14988–14992 (2016). Li, C. X., Wu, L., Yu, C. L., Dong, Z. C. & Jiang, L. Peristome-mimetic curved surface for spontaneous and directional separation of micro water-in-oil drops. Angew. Chem. Int. Ed. 56, 13623–13628 (2017). Wu, L. et al. Bioinspired ultra-low adhesive energy interface for continuous 3D printing: reducing curing induced adhesion. Research 2018, 4795604 (2018). Wang, Y. C., Zhang, L. B. & Wang, P. Self-floating carbon nanotube membrane on macroporous silica substrate for highly efficient solar-driven interfacial water evaporation. ACS Sustain. Chem. Eng. 4, 1223–1230 (2016). Yu, C. L. et al. Facile preparation of the porous PDMS oil-absorbent for oil/water separation. Adv. Mater. Interfaces 4, 1600862 (2017). Chaudhury, M. K. & Whitesides, G. M. How to make water run uphill. Science 256, 1539–1541 (1992). Deegan, R. D. et al. Capillary flow as the cause of ring stains from dried liquid drops. Nature 389, 827–829 (1997). de Gennes, P. G., Brochard-Wyart, F. & Quere, D.
Capillarity and Wetting Phenomena: Drops, Bubbles, Pearls, Waves (Springer, 2010). Li, X. et al. Measuring conversion efficiency of solar vapor generation. Joule 3, 1798–1803 (2019). Henderson-Sellers, B. A new formula for latent heat of vaporization of water as a function of temperature. Q. J. R. Meteorol. Soc. 110, 1186–1190 (1984). World Health Organization (WHO). Safe Drinking-Water from Desalination (WHO, 2011). Kabeel, A. E. & El-Agouz, S. A. Review of researches and developments on solar stills. Desalination 276, 1–12 (2011). We acknowledge funding of the National Key R&D Program of China (Grant Nos. 2018YFA0208501, 2016YFB0401603, 2016YFC1100502, and 2016YFB0401100), the National Natural Science Foundation (Grant Nos. 51803219, 51773206, and 91963212), the K.C. Wong Education Foundation, and the Beijing National Laboratory for Molecular Sciences (BNLMS-CXXM-202005). N.X.F. and T.G. are grateful for the seed funding provided by the MIT Energy Initiative. These authors contributed equally: Lei Wu, Zhichao Dong. Key Laboratory of Green Printing, Institute of Chemistry, Chinese Academy of Sciences, Zhongguancun North First Street 2, Beijing, 100190, PR China: Lei Wu, Zheren Cai, Yu Zhang & Yanlin Song. CAS Key Laboratory of Bio-inspired Materials and Interfacial Sciences, Technical Institute of Physics and Chemistry, Chinese Academy of Sciences, 29 Zhongguancun East Road, Beijing, 100190, PR China: Zhichao Dong & Chuxin Li. University of Chinese Academy of Sciences, Beijing, 100049, PR China: Zheren Cai, Yu Zhang & Yanlin Song. Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA: Turga Ganapathy & Nicholas X. Fang. Beihang University, Xueyuan Road No. 37, Beijing, 100191, PR China: Cunlong Yu. L.W. and Z.D. contributed equally to this work. Y.S. conceived and designed the experiments. L.W., Z.D., C.L., C.Y. and Y.Z. performed the experiments.
Y.S., L.W., and Z.D. analyzed the data. Z.C., T.G., and N.F. conducted the numerical simulation. L.W. and Z.D. wrote the original paper, Y.S. helped revise it. All authors discussed the results and commented on the paper. Correspondence to Yanlin Song. Peer review information Nature Communications thanks Ho-Suk Choi, Xiwang Zhang and the other, anonymous, reviewer for their contribution to the peer review of this work. Peer reviewer reports are available. Wu, L., Dong, Z., Cai, Z. et al. Highly efficient three-dimensional solar evaporator for high salinity desalination by localized crystallization. Nat Commun 11, 521 (2020). https://doi.org/10.1038/s41467-020-14366-1
Evaluation of lime and hydrothermal pretreatments for efficient enzymatic hydrolysis of raw sugarcane bagasse Maira Prearo Grimaldi1, Marina Paganini Marques1, Cecília Laluce1, Eduardo Maffud Cilli1 & Sandra Regina Pombeiro Sponchiado1 Biotechnology for Biofuels volume 8, Article number: 205 (2015) Ethanol production from sugarcane bagasse requires a pretreatment step to disrupt the cellulose-hemicellulose-lignin complex and to increase biomass digestibility, thus allowing high yields of fermentable sugars to be obtained for the subsequent fermentation. Hydrothermal and lime pretreatments have emerged as effective methods for preparing lignocellulosic biomass for bioconversion. These pretreatments are advantageous because they can be performed under mild temperature and pressure conditions, resulting in less sugar degradation than other pretreatments, and because they are cost-effective and environmentally sustainable. In this study, we evaluated the effect of these pretreatments on the efficiency of enzymatic hydrolysis of raw sugarcane bagasse obtained directly from the mill without prior screening. In addition, we evaluated the structural and compositional modifications of this bagasse after lime and hydrothermal pretreatments. The highest cellulose hydrolysis rate (70 % digestion) was obtained for raw sugarcane bagasse pretreated with lime [0.1 g Ca(OH)2/g raw bagasse] for 60 min at 120 °C, compared with hydrothermally pretreated bagasse (21 % digestion) under the same time and temperature conditions. Chemical composition analyses showed that the lime pretreatment of bagasse promoted high solubilization of lignin (30 %) and hemicellulose (5 %), accompanied by cellulose accumulation (11 %). Analysis of the pretreated bagasse structure revealed that lime pretreatment caused considerable damage to the bagasse fibers, including rupture of the cell wall, exposing the cellulose-rich areas to enzymatic action.
We showed that lime pretreatment is effective in improving the enzymatic digestibility of raw sugarcane bagasse, even at a low lime loading and over a short pretreatment period. It was also demonstrated that this pretreatment caused alterations in the structure and composition of the raw bagasse, which had a pronounced effect on the accessibility of the enzymes to the substrate, resulting in an increased cellulose hydrolysis rate. These results indicate that the use of raw sugarcane bagasse (without prior screening) pretreated with lime (a cheap and environmentally friendly reagent) may represent a cost reduction in cellulosic ethanol production. Following the world trend for more research on alternative fuels, the Brazilian sugar and ethanol industry has shown interest in sustainable technologies that can be aggregated to its productive chain. Efforts are currently being directed toward the inclusion of sugarcane bagasse, straw, and tops in the production cycle of second-generation ethanol [1–6]. Due to its abundance and low cost, sugarcane bagasse is considered an interesting raw material for bioconversion, since the sugars contained in the cellulose and hemicellulose fractions represent substrates that can be used by yeast for cellulosic ethanol production. The use of this biomass would bring economic and ecological benefits because it allows production to be increased without the need to increase the planted area and solves the problem of disposal of this residue [4, 7–9]. Due to its recalcitrant structure, pretreatment is a necessary step to change some structural characteristics of bagasse and to increase cellulose accessibility to hydrolytic enzymes in order to provide high yields of fermentable sugars for the subsequent fermentation [10–16].
Considering that pretreatment represents the second most expensive step in the conversion of biomass into ethanol, the great challenge of this technology is to find an appropriate strategy to disrupt the lignocellulosic complex, allowing enzymatic hydrolysis with low enzyme loads and short conversion times, in a cost-effective and environmentally sustainable manner [5, 14, 17]. Physical, chemical, physicochemical, and biological pretreatments are currently applied to different lignocellulosic biomasses, but the choice of an appropriate pretreatment must take into account factors such as: (1) increase in accessible surface area; (2) cellulose decrystallization; (3) modification of the lignin structure; (4) solubilization of hemicellulose and/or lignin; (5) no significant hemicellulose and cellulose degradation; (6) increased yield of fermentable sugars after enzymatic hydrolysis; (7) low generation of toxic compounds potentially inhibitory for yeasts; (8) no requirement for biomass size reduction; (9) use of cheap and environmentally friendly reagents; and (10) catalyst recovery and/or solvent recycling. These factors significantly affect the costs associated with the pretreatment step [3, 8, 11, 12, 18, 19]. Hydrothermal and alkaline pretreatments have emerged as effective methods for preparing lignocellulosic biomass for enzymatic hydrolysis because they operate under mild temperature and pressure conditions, cause less sugar degradation than acid pretreatment, and also promote delignification and deacetylation, depending on the pretreatment severity, greatly enhancing carbohydrate digestibility [10, 17].
Hydrothermal pretreatment, also called liquid hot water pretreatment, has economic advantages and is environmentally friendly because it uses only water as the reaction medium without additional chemicals, does not require special non-corrosive reactors or preliminary feedstock size reduction, and produces only small amounts of undesired degradation compounds, such as furfural. The main effect of this pretreatment is to solubilize mainly hemicellulose and to cause structural changes in lignin, which contribute to the reduction of biomass recalcitrance, making cellulose more susceptible to enzymatic action [9, 12, 15, 20]. Laser et al. [21] reported that liquid hot water pretreatment promoted 86 % cellulose conversion by simultaneous saccharification and fermentation and 82 % xylan recovery from sugarcane bagasse, with no inhibition of the glucose fermentation rate. Lime pretreatment is another attractive method because calcium hydroxide is much cheaper than other alkalis, has low toxicity to the environment, and can be easily recovered from the hydrolysate as insoluble calcium carbonate by reaction with carbon dioxide; calcium hydroxide can subsequently be regenerated using lime kiln technology. This pretreatment is very effective in the removal of amorphous substances, such as lignin and hemicellulose, because it cleaves α- and β-ether bonds in phenolic units and β-ether linkages in non-phenolic units, which causes disruption of the lignin structure and changes in the degree of polymerization and crystallinity of cellulose, enhancing the accessibility of enzymes to the substrate. Compared with acid and hydrothermal pretreatments, alkaline methods cause less degradation of the cellulosic fraction, which results in a greater release of sugars during enzymatic hydrolysis [10, 12, 16, 18]. Rabelo et al. [22] reported that higher yields of total reducing sugars were obtained after enzymatic hydrolysis of lime-pretreated sugarcane bagasse than of bagasse treated with alkaline hydrogen peroxide.
The effectiveness of a pretreatment in increasing the digestibility of lignocellulosic biomass depends on substrate structure and composition as well as on pretreatment conditions. In this sense, the aim of this study was to evaluate the effect of hydrothermal and lime pretreatments on the structure, composition, and susceptibility to enzymatic hydrolysis of raw sugarcane bagasse coming from the mill without prior screening. Although hydrothermal and lime pretreatments have been studied on different types of lignocellulosic biomass, only one study was performed using sugarcane bagasse as it comes from the mill [23]. The use of such bagasse may contribute to reducing operating costs because it is not submitted to any preparation steps (such as screening), which are expensive and time-consuming. In this paper, we show that lime pretreatment was more effective than hydrothermal pretreatment in promoting higher cellulose digestibility rates. It was also demonstrated that this increased hydrolysis rate is related to changes in the structure and composition of the bagasse that occurred during lime pretreatment. Enzymatic hydrolysis of pretreated sugarcane bagasse To evaluate the efficiency of the hydrothermal and lime pretreatments in enhancing the digestibility of raw sugarcane bagasse, the rates of conversion of cellulose into glucose during enzymatic hydrolysis of pretreated bagasse were measured. As shown in Fig. 1, the highest hydrolysis rates were obtained for bagasse pretreated with lime, and the pretreatment time exerted more influence on glucose release from bagasse pretreated with lime than for hydrothermal pretreatment or untreated bagasse. After 72 h of enzymatic hydrolysis, glucose released from bagasse pretreated with lime reached 256, 320, and 384 mg/g dry bagasse, corresponding to 52, 57, and 70 % cellulose digestion at 7, 30, and 60 min of pretreatment, respectively.
For hydrothermal pretreatment, the values were 83, 97, and 101 mg/g dry bagasse, with cellulose digestion percentages varying from 17 to 21 % for the same pretreatment times. These values were very similar to those obtained for hydrolysis of untreated bagasse. Glucose release during the enzymatic hydrolysis of raw sugarcane bagasse submitted to lime (LIME) and hydrothermal (HYDR) pretreatments for 7, 30, and 60 min compared to untreated bagasse. The lines represent an exponential fit using the equation y = y0 + A × exp(R0x). Our results also showed evidence that cellulose digestion depends on the pretreatment time. Sugarcane bagasse submitted to lime pretreatment exhibited an increase of 208, 223, and 280 % in glucose release compared to hydrothermal pretreatment after 7, 30, and 60 min of pretreatment, respectively. The statistical analysis of the data confirmed that the variables studied (time and pretreatment) and the interaction between them have a significant effect (p < 0.05) on cellulose digestion. Time is a very important parameter for an economic analysis of the process because it allows evaluating whether the increase in glucose released during saccharification compensates for the energy cost of longer pretreatment periods [4, 5, 19]. Playne [32] obtained 60 % cellulose digestion when sugarcane bagasse was pretreated with 0.12 g Ca(OH)2/g dry raw bagasse for 8 days at 20 °C. Fuentes et al. [33] and Rabelo et al. [34] obtained a glucose yield of 228.45 mg/g dry raw bagasse for sugarcane bagasse pretreated with 0.4 g Ca(OH)2/g dry biomass for 90 h at 90 °C. Chang, Holtzapple, and Nagwani [24], using sugarcane bagasse pretreated with 0.1 g Ca(OH)2/g dry raw bagasse, obtained a yield of 300 mg/g dry bagasse after 1 h of pretreatment at 120 °C. In the present study, a higher glucose yield (320 mg/g dry raw bagasse) was obtained when bagasse was treated with the same amount of lime [0.1 g Ca(OH)2/g dry raw bagasse] but within a shorter time (30 min).
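The conversion between released glucose and percent cellulose digestion can be sketched as follows. The 162/180 anhydro-glucose correction is the standard convention for this calculation; that this study used exactly this bookkeeping is an assumption:

```python
# Convert released glucose (mg per g dry bagasse) into percent cellulose
# digestion, using the standard 162/180 anhydro-glucose correction.

ANHYDRO = 162.0 / 180.0     # mass ratio of glucan to the glucose it yields

def digestion_percent(glucose_mg_per_g, cellulose_frac):
    """glucose_mg_per_g: glucose released per g dry bagasse;
    cellulose_frac: cellulose mass fraction of the (pretreated) bagasse."""
    potential_glucose = cellulose_frac * 1000.0 / ANHYDRO   # mg glucose per g
    return 100.0 * glucose_mg_per_g / potential_glucose

# Lime, 60 min: 384 mg/g with ~49.6 % cellulose
print(round(digestion_percent(384.0, 0.4956), 1))  # -> 69.7, close to 70 %
```

With the 60-min lime figures (384 mg/g glucose, ~49.6 % cellulose), this convention reproduces the reported ~70 % digestion.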
Chemical composition of pretreated sugarcane bagasse

It is known that the rate of enzymatic hydrolysis of lignocellulosic substrates is related to the changes in biomass composition and structure that occur during pretreatment. In order to explain the different percentages of cellulose digestion obtained, the cellulose, hemicellulose, and lignin contents of the bagasse before and after lime and hydrothermal pretreatments were determined. As can be seen in Table 1, the raw sugarcane bagasse (control) used in this study presented a composition of cellulose (45 %), hemicellulose (33 %), and lignin (24 %) similar to that reported in the literature for the same material, with values varying from 39 to 45 % for cellulose, 26–36 % for hemicellulose, and 11–25 % for lignin [35].

Table 1 Chemical composition of the raw sugarcane bagasse after hydrothermal (HYDR) and lime (LIME) pretreatments

Analyzing the composition of bagasse submitted to the different pretreatments, the greatest changes in lignin, hemicellulose, and cellulose contents occurred in bagasse pretreated with lime. After 60 min of lime pretreatment, the lignin percentage decreased from 23.77 to 16.7 %, the hemicellulose content remained practically unchanged (varying from 32.77 to 31.86 %), while the cellulose content increased from 44.49 to 49.56 % (Table 1). These alterations resulted in a greater mass reduction in bagasse pretreated with lime compared to hydrothermally pretreated bagasse. The pretreatment yield, expressed as a percentage of the initial material, ranged from 51 to 75 % for lime pretreatment and from 70 to 89 % for hydrothermal pretreatment, the mass loss being proportional to the pretreatment time (Table 1). Rabelo, Maciel Filho, and Costa [23] also evaluated the effect of lime pretreatment on sugarcane bagasse and obtained a pretreatment yield of 58.73 % using 0.4 g Ca(OH)2/g dry bagasse for 90 h at 90 °C.
We obtained a lower yield (51.1 %) with a lower lime loading (0.1 g Ca(OH)2/g dry bagasse) and a much shorter pretreatment time (60 min). Figure 2 shows more clearly that lime pretreatment affected mainly the lignin fraction, which was gradually removed with increasing pretreatment time, reaching 30 % solubilization after 60 min, while no lignin solubilization occurred in the hydrothermal treatment. The solubilization of the hemicellulose fraction was lower for both pretreatments, ranging from 3 to 7 % after 30 and 60 min of pretreatment. It was also observed that hydrothermal pretreatment removed a small portion of cellulose, with only 1.5 to 3 % solubilized. In contrast, the cellulose content in bagasse pretreated with lime increased by 11 % after 30 and 60 min of pretreatment.

Fig. 2 Accumulation and solubilization of lignin, hemicellulose, and cellulose of raw sugarcane bagasse submitted to hydrothermal and lime pretreatments

Our results corroborate those of other studies, such as Chang et al. [36, 37], Mosier et al. [38], Hendriks and Zeeman [10], and Rabelo et al. [23], which also reported that lime pretreatment has a major effect on delignification, accompanied by a small dissolution of hemicellulose, while cellulose is not affected by this pretreatment. The lack of cellulose degradation can be explained by its high degree of polymerization and crystallinity and its low reactivity with alkali, owing to its relative stability under alkaline conditions. Hemicellulose, however, is more labile, and consequently dissolution of this polysaccharide can occur [11, 39–42]. Cellulose enrichment is of great importance for the production of ethanol from biomass, because preserving the cellulosic fraction results in a higher concentration of fermentable sugars after enzymatic hydrolysis, which is essential for the economic viability of the bioconversion process [3, 5, 9, 43].
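The solubilization and accumulation percentages above appear to be relative changes in component content between raw and pretreated bagasse; under that reading (our assumption for illustration), they can be reproduced from the Table 1 compositions:

```python
def relative_change(raw_pct: float, pretreated_pct: float) -> float:
    """Relative change in a component's content after pretreatment, in percent.

    Positive values indicate solubilization (content decreased);
    negative values indicate accumulation (content increased).
    """
    return 100.0 * (raw_pct - pretreated_pct) / raw_pct

# 60-min lime pretreatment, compositions from Table 1 (% of dry mass)
lignin_solubilized = relative_change(23.77, 16.7)       # ~30 %
cellulose_accumulated = -relative_change(44.49, 49.56)  # ~11 %
```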
A similar study was carried out by Chang, Nagwani, and Holtzapple [24] using sugarcane bagasse pretreated with 0.1 g Ca(OH)2/g dry biomass at 120 °C for 1 h, achieving 19 % lignin solubilization, 1 % hemicellulose solubilization, and 7 % cellulose accumulation. In the present work, the same pretreatment conditions described by these authors were used, but our results with raw sugarcane bagasse were superior: 30 % of the lignin and 5 % of the hemicellulose were removed, resulting in an 11 % increase in the cellulose content. This variation in results may be related to differences in particle size, processing conditions, and sugarcane cultivars [3, 44]. In order to assess the significance of the effects of pretreatment time and type on the solubilization of the lignin, hemicellulose, and cellulose fractions in pretreated bagasse, analysis of variance (ANOVA) was performed at a 95 % confidence level. The analysis showed that the variables studied (time and pretreatment) as well as the interaction between them exerted a significant influence on the delignification of pretreated bagasse, with p values less than 0.05. The same behavior was observed for cellulose: all variables studied and their interaction were significant (p < 0.05), with the effect of pretreatment type, followed by the treatment/time interaction, causing the greatest change in this fraction. For hemicellulose, only time was significant (p < 0.05); the treatment/time interaction did not significantly influence the solubilization of this fraction (p > 0.05). Thus, this analysis confirmed that lime pretreatment affected the lignin and cellulose fractions, inducing high lignin solubilization and cellulose accumulation proportional to the pretreatment time.
Several studies have shown that variations in the composition of biomass submitted to different pretreatments can be related to pH and holding time conditions, which affect the pretreatment severity and consequently have a great effect on enzymatic hydrolysis [18, 45, 46]. In the present study, the "severity factor" was used as a parameter to compare the effects of lime and hydrothermal pretreatments on raw sugarcane bagasse. Figure 3 shows lignin solubilization as a response to the calculated severity factor and the pH obtained in the hydrothermal (pH 2.9, 3.6, and 4.4) and lime (pH 6.2, 6.5, and 6.7) pretreatments for times of 7, 30, and 60 min, respectively.

Fig. 3 Lignin solubilization as a response to pH and calculated severity factor at different pretreatment times

As can be seen in Fig. 3, lime pretreatment exhibited a higher severity factor (6.2–6.5) than hydrothermal pretreatment (2.9–4.4). It was also observed that the higher severity factor is related to the increased lignin solubilization and alkaline pH resulting from lime pretreatment. These results are consistent with other studies that consider pretreatment pH an important factor when analyzing pretreatment severity with respect to lignin solubilization. When pretreatment is carried out at alkaline pH under mild conditions (below 140 °C), it affects the biomass composition, reducing mainly the lignin content due to cleavage of the ester linkages joining phenolic acids: nucleophilic acyl substitution of ester bonds normally takes place during reaction with an alkaline salt (calcium hydroxide). This promotes lignin solubilization, thereby making the biomass more digestible and resulting in an increased glucose yield from hydrolysis as a consequence of enhanced enzyme-catalyzed cellulose degradation [18, 47]. Our results confirmed that lime pretreatment had a more pronounced effect on the efficiency of enzymatic hydrolysis of raw bagasse compared to hydrothermal pretreatment (Fig. 1).
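The severity factor is defined in Methods (Eqs. 2 and 3) from holding time, temperature, and pH. A literal implementation, assuming a constant 120 °C autoclave temperature, is sketched below; note that the severity values reported in Fig. 3 may follow a different convention than this direct evaluation.

```python
import math

def log_R0(t_min: float, T_C: float) -> float:
    """log10 of the severity factor R0 = t * exp((T - 100) / 14.75)."""
    return math.log10(t_min * math.exp((T_C - 100.0) / 14.75))

def log_R0_pH(t_min: float, T_C: float, pH: float) -> float:
    """pH-adjusted severity: log(R0'') = log(R0) + |pH - 7|."""
    return log_R0(t_min, T_C) + abs(pH - 7.0)

# 60-min lime pretreatment at 120 degrees C, measured pH 6.7
severity = log_R0_pH(60.0, 120.0, 6.7)  # ~2.67
```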
The high percentage of cellulose digestion obtained for bagasse pretreated with lime indicates that the cellulosic fraction is more accessible to enzymes, probably due to alterations in bagasse composition after pretreatment. As can be seen in Fig. 4, the pretreatment with lime promoted greater bagasse delignification (30 % solubilization after 60 min), resulting in higher cellulose digestion. Bagasse pretreated with lime reached 70 % cellulose digestion, while in hydrothermally pretreated bagasse only 21 % of the cellulose was converted into glucose after 60 min of pretreatment, a value very close to that obtained for untreated bagasse (14 % cellulose digestion).

Fig. 4 Chemical composition and percentage of cellulose digestion of raw sugarcane bagasse submitted to hydrothermal (HYDR) and lime (LIME) pretreatments

According to the literature, the presence of lignin in biomass restricts enzymatic hydrolysis because it acts as a physical barrier preventing the access of cellulase to the substrate and also as a competitive adsorbent for cellulases, reducing the activity of the adsorbed enzymes [15, 43]. Chang and Holtzapple [48] reported that enzymatic digestibility correlates with three structural factors of biomass: lignin content, crystallinity, and acetyl content. They concluded that (1) extensive delignification is sufficient to obtain high digestibility regardless of acetyl content and crystallinity, (2) delignification and deacetylation remove parallel barriers to enzymatic hydrolysis, and (3) crystallinity significantly affects initial hydrolysis rates but has less effect on sugar yields. Lee and Fan [49] reported that the enzymatic hydrolysis rate depends on enzyme adsorption and the effectiveness of the adsorbed enzymes, rather than on the diffusive mass transfer of enzymes. Lignin removal improves enzyme effectiveness by eliminating nonproductive adsorption sites and increasing access to cellulose and hemicellulose.
In addition, alkaline saponification of acetyl and uronic ester groups in hemicellulose reduces the steric hindrance of hydrolytic enzymes and also contributes to enhancing the enzymatic accessibility of polysaccharides [11, 42]. Thus, our results confirmed that the high glucose yields obtained after enzymatic hydrolysis of raw bagasse pretreated with lime are probably related to the low lignin and hemicellulose contents and the high cellulose content of the bagasse after pretreatment.

Structural analysis of the pretreated sugarcane bagasse

Several studies have shown that lime pretreatment has a remarkable effect on lignocellulosic biomass structure [16, 23, 50]. Under alkaline conditions, calcium ions extensively crosslink lignin molecules; the pretreatment disrupts the chemical bonds that stiffen lignocellulose by removing lignin and acetyl groups from hemicelluloses, which increases biomass porosity and effectively improves the enzymatic digestibility of the pretreated material [48, 51, 52]. In the present study, modifications on the surface of bagasse pretreated with lime for 30 and 60 min were analyzed by scanning electron microscopy (Fig. 5).

Fig. 5 Scanning electron microscopy of raw sugarcane bagasse without pretreatment (a) and pretreated with lime (b)

From the analysis of Fig. 5, it was observed that, although tissue integrity was maintained to some extent, there are signs of fragmentation on the surface of bagasse pretreated with lime. For untreated samples, an ordered matrix structure with whole cells was observed (Fig. 5a), while bagasse pretreated with lime presented considerable damage to its structure, including rupture of the cell wall, exposing inner parts of the cell (Fig. 5b). Disaggregation of cell bundles and the formation of long cellular structures in pretreated bagasse were also observed (Fig. 5b). Rezende et al.
[50], using a two-step pretreatment (dilute acid followed by alkaline treatment with NaOH), reported that in bagasse submitted to NaOH at concentrations lower than 0.5 %, the cell bundles start to dismantle and the fibers become detached from one another. When NaOH concentrations above 0.5 % are used, unidirectional separation of the cell wall bundles of the pretreated samples is observed. These results showed that lignin removal caused destructuring of the bagasse cell wall, which occurs at two levels. The first level refers to the loss of cohesion between neighboring cell walls, while the second corresponds to degradation inside the cell wall, caused by peeling off and the formation of holes. The results obtained in the present work are in agreement with these observations, since the disruption of fibers observed in bagasse pretreated with lime (Fig. 5) was probably due to the removal of lignin during lime pretreatment (30 % delignification after 60 min), which resulted in an increased conversion rate of cellulose into glucose (70 % saccharification), as shown in Fig. 4. Rezende et al. [50] also reported that these morphological alterations are important for improving cellulose hydrolysis, because enzymatic action is hindered when bagasse fibers are packed and their surfaces are protected by lignin, which acts as an 'enzymatic trap', causing unproductive adsorption of cellulases to the substrate. Thermogravimetric analysis (TGA) was performed to better understand the effects of lime pretreatment on the structure of raw sugarcane bagasse. This analysis provides a useful tool to characterize bagasse fibers after pretreatment because the thermal behavior of lignocellulosic biomass is closely related to the chemical composition of the fibers and the physical characteristics of lignin, hemicellulose, and cellulose during thermal decomposition of pretreated bagasse [53]. Figs.
6, 7, and 8 show the thermogravimetric profiles of untreated bagasse and of bagasse pretreated with lime for 30 and 60 min. The DTA (differential thermal analysis) curves showed three exothermic events, in agreement with the TG (thermogravimetric) curves, indicating three weight-loss stages: the first stage, at 100 °C, is attributed to the elimination of moisture, accompanied by 8 % mass loss; the second stage occurs between 300 and 350 °C, with a weight loss of about 63–67 %; and the third stage occurs in the temperature range of 380–400 °C, with a weight loss of 22 to 27.4 %. The second and third stages are attributed to the decomposition of lignin, hemicellulose, and cellulose, which have similar stabilities. According to the literature, hemicellulose decomposes first, followed by lignin and cellulose, and there is no well-defined region for the breakdown of each of these fractions [54, 55].

Fig. 6 Thermal decomposition curve of untreated raw sugarcane bagasse. Conditions: 10 °C/min in air atmosphere, alumina crucible (↓ exothermic peak)

Fig. 7 Thermal decomposition curve of raw sugarcane bagasse pretreated with lime for 30 min. Conditions: 10 °C/min in air atmosphere, alumina crucible (↓ exothermic peak)

As shown by the thermogravimetric analysis (TG/DTA), the distance between the peaks related to the second and third stages is smaller for pretreated samples than for untreated bagasse. These results suggest that lime pretreatment altered the relative amounts of the components (cellulose, hemicellulose, and lignin) of the pretreated bagasse. This is in agreement with the compositional analysis of pretreated bagasse (Fig. 2), which showed that lime pretreatment promoted high delignification (30 % solubilization) and also a small hemicellulose degradation (5 % solubilization) in pretreated bagasse. In this study, X-ray diffraction analyses were performed to evaluate the impact of cellulose crystallinity on the digestibility of lime-pretreated bagasse.
Cellulose crystallinity has been considered a biomass recalcitrance feature that, along with specific surface area, degree of polymerization, cellulose sheathing by hemicelluloses, and lignin and acetyl contents, affects enzymatic hydrolysis performance in pretreated bagasse [11]. Crystallinity is strongly influenced by biomass composition, as a consequence of the relative amounts of lignin, hemicellulose, and cellulose, which vary according to the pretreatment applied to the biomass [15]. Figure 9 shows two peaks, one at 16° and another at 22°, whose widths were evaluated as the full width at half maximum (FWHM). For bagasse pretreated with lime for 60 and 30 min, peak intensities of 300 and 285 cps, corresponding to 61 and 60 % crystallinity, respectively, were obtained, while for untreated bagasse the peak intensity was 175 cps, with 43 % crystallinity. These data indicate that lime pretreatment increased the cellulose crystallinity degree (Ic) of pretreated bagasse. Similar results were obtained by Chundawat et al. [56], who compared the effect of several pretreatments on the digestibility of lignocellulosic biomass. The authors observed that dilute acid, hydrothermal, steam explosion, and lime pretreatments generally result in a relative increase of cellulose crystallinity compared to the untreated control. According to Ishizawa et al. [57] and Sheikh et al. [58], the increase in cellulose crystallinity is caused by lignin removal, which exposes the crystalline cellulose core and increases the glucan content in the solid fraction of pretreated biomass. Other studies have also reported that the crystallinity degree increases slightly when amorphous components (such as lignin and hemicelluloses) are removed [12, 48, 59]. Thus, the increase in cellulose crystallinity obtained in the present study may be a consequence of the high delignification (30 % lignin solubilization) of bagasse pretreated with lime (Fig. 2).
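The crystallinity values above follow the Segal index defined in Methods (Eq. 4). In the sketch below, the 002 peak intensity is the reported 175 cps for untreated bagasse, while the amorphous intensity is a hypothetical value chosen for illustration, since it is not reported:

```python
def segal_crystallinity(I_002: float, I_am: float) -> float:
    """Segal crystallinity index: (I002 - Iam) / I002 * 100.

    I_002: intensity of the 002 crystalline peak (2-theta = 22 deg).
    I_am:  intensity of the amorphous background (2-theta = 16 deg).
    """
    return 100.0 * (I_002 - I_am) / I_002

# untreated bagasse: 175 cps at the 002 peak; I_am = 100 cps is hypothetical
ci_untreated = segal_crystallinity(175.0, 100.0)  # ~43 %, as reported
```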
Fig. 9 X-ray diffraction analysis of raw sugarcane bagasse pretreated with lime for 30 min (LIME 30) and 60 min (LIME 60)

Taking into account all the results obtained in the present study, it can be inferred that they are consistent with the model proposed by Chang and Holtzapple [48]. According to this model, enzymes flow through pipes before reaching the substrate tank, and the flow through each pipe is regulated by a large valve (lignin content). When the lignin valve is opened (i.e., most lignin is removed), enzymes can easily flow through the wide pipe and arrive at the substrate tank to be adsorbed on the substrate surface. In contrast, if the lignin valve is closed (i.e., little or no lignin is removed), enzymes can hardly flow through the pipe. After the enzymes arrive at the substrate tank, they begin to work. How fast they work (i.e., enzyme effectiveness) depends on the substrate crystallinity. If the substrate is amorphous, enzyme effectiveness is high and the enzymes adsorb on the substrate more rapidly. In contrast, if the substrate is highly crystalline, enzyme effectiveness is low and the enzymes work slowly. In this model, the extent of enzymatic hydrolysis depends on two factors: how many enzymes arrive at the substrate tank and how fast they work. In the present study, our results showed that lime pretreatment promoted a greater reduction in the lignin content of raw sugarcane bagasse, allowing enough enzymes to reach the carbohydrate polymers (cellulose); although these enzymes were not as effective due to the high substrate crystallinity, the amount of enzymes adsorbed on the substrate was sufficient to achieve high cellulose conversion (70 % digestion) after a 3-day period. From the results obtained in this study, it can be concluded that lime pretreatment was more efficient at promoting high digestibility of raw sugarcane bagasse (70 % cellulose digestion) compared with hydrothermal pretreatment (21 % cellulose digestion).
This increase in the cellulose hydrolysis rate was mainly due to lignin and hemicellulose removal (30 and 5 % solubilization, respectively) and the increased cellulose content (11 % enrichment) of bagasse pretreated with lime. Analysis of the pretreated bagasse structure revealed that lime pretreatment caused considerable damage to the bagasse fibers, including rupture of the cell wall, exposing cellulose-rich areas to enzymatic action and consequently contributing to the high conversion rate. Compared with the literature, our results show that it is possible to obtain a high yield of fermentable sugars (384 mg glucose/g dry bagasse) using raw sugarcane bagasse pretreated with a low lime loading (0.1 g Ca(OH)2/g dry bagasse) and a short pretreatment time (60 min) at 120 °C. These results are of substantial importance for the production of cellulosic ethanol, because the use of raw sugarcane bagasse (without prior screening) pretreated with lime (a cheap and environmentally friendly reagent) may represent a cost reduction in the bioconversion process.

Fresh sugarcane bagasse was kindly provided by the São Martinho sugar/ethanol plant (Pradópolis/SP, Brazil). It was dried at 60 °C to constant weight and kept in plastic bags in a freezer. This biomass, denominated raw bagasse, was used as it comes from the mill, with different particle sizes (without any screening step), as shown in Fig. 10.

Fig. 10 Sample of raw sugarcane bagasse

Lime and hydrothermal pretreatments

Lime pretreatment was carried out as described by Chang, Nagwani, and Holtzapple [24]. In 500 mL flasks, raw bagasse (1 g dry weight) was treated with 100 mL of calcium hydroxide solution (1 % w/v) at a ratio of 0.1 g lime per gram of dry bagasse for 7, 30, and 60 min at 120 °C in an autoclave. For hydrothermal pretreatment, 100 mL of distilled water was added to raw bagasse (1 g dry weight) in 500 mL flasks and autoclaved under the same conditions.
Subsequently, the flasks were cooled to room temperature and the solid fraction (pretreated bagasse) was separated from the hydrolysate by vacuum filtration, washed thoroughly with water to neutral pH, and dried at 60 °C in an oven for 24 h. The dry weight obtained was used to determine the pretreatment yield. All experiments were performed in duplicate.

The chemical composition of untreated and pretreated raw sugarcane bagasse was determined according to the analytical procedures established by NREL [25]. Raw bagasse samples (100 mg dry weight) were treated with 1 mL of sulfuric acid (72 % w/w) under vigorous stirring for 1 h at 30 °C. Thereafter, 84 mL of distilled water was added to the slurry and the mixture was kept at 120 °C for 1 h to complete the hydrolysis of oligosaccharides. After cooling, the samples were filtered and the liquid phase was stored at −18 °C for subsequent analysis of total solids, ash, structural carbohydrates, and lignin. The concentration of polymeric sugars (cellulose and hemicellulose) was determined from the concentration of the corresponding monomeric sugars, using an anhydro correction of 0.88 for C-5 sugars (xylose and arabinose) and a correction of 0.90 for C-6 sugars (glucose, galactose, and mannose). The soluble lignin content of the liquid-phase samples was determined by measuring the absorbance at 240 nm on a UV–visible spectrophotometer. For determination of insoluble lignin, the solid fraction was rinsed with water until reaching neutral pH, to remove acid residues, and dried in an oven at 105 °C to constant weight. Ash content was obtained by burning in a muffle furnace at 600 °C for 24 h. Total lignin was calculated as the sum of the soluble and insoluble lignin fractions.
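The anhydro corrections above convert monomeric sugars measured in the hydrolysate back into their polymeric (cellulose/hemicellulose) equivalents, accounting for the water added during hydrolysis. A small sketch with hypothetical concentrations:

```python
# anhydro corrections: polymerization removes one water per sugar unit
ANHYDRO_C6 = 0.90  # glucose, galactose, mannose -> glucan units
ANHYDRO_C5 = 0.88  # xylose, arabinose -> xylan/arabinan units

def polymer_equivalent(monomer_g_per_L: float, correction: float) -> float:
    """Polymeric sugar concentration equivalent to a measured monomer."""
    return monomer_g_per_L * correction

# hypothetical hydrolysate concentrations (g/L)
glucan = polymer_equivalent(10.0, ANHYDRO_C6)  # 9.0 g/L cellulose equivalent
xylan = polymer_equivalent(5.0, ANHYDRO_C5)    # 4.4 g/L hemicellulose equivalent
```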
Enzymatic hydrolysis

Enzymatic hydrolysis of pretreated raw sugarcane bagasse (insoluble fiber) was performed according to the laboratory analytical procedures (LAP) described by NREL [26], using a commercially available enzyme preparation (Accellerase 1500®) kindly provided by Genencor International (Rochester, NY, USA). The enzymatic blend consisted of cellulase (15 FPU/g substrate) and β-glucosidase (75 U/g substrate), and the activities of these enzymes were determined according to the methods described by Ghose [27]. Pretreated raw sugarcane bagasse samples were hydrolyzed in 50 mmol/L citrate buffer (pH 4.8) at a solid:liquid ratio of 1:100 (w/v), supplemented with enzymes and sodium azide (40 mg/L) to inhibit microbial contamination. The mixture was incubated at 50 °C for 72 h on a rotary shaker (150 rpm). All assays were performed in duplicate for each sampling time (12, 24, 48, and 72 h). Hydrolysate samples were collected, boiled to deactivate the enzymes, and analyzed for glucose and total reducing sugars. The percentage of cellulose digestion was calculated as the ratio between the amount of cellulose digested and the amount of cellulose added, as shown in Eq. 1 [26]:

$$ \%\,\text{Digestion} = \frac{\text{grams cellulose digested}}{\text{grams cellulose added}} \times 100 $$

Sugar measurements

Glucose and total reducing sugars were determined colorimetrically at 540 nm using the GOD-PAP method [28] and DNS reagent [29], respectively. The xylose and arabinose concentrations were measured colorimetrically at 671 nm using Bial's reagent, according to the method described by Pham et al. [30].

Severity factor

The severity factor log(R0) was used to unify data obtained at different combinations of reaction time and pH of the hydrothermal and lime pretreatments with respect to lignin solubilization. The factor R0, incorporating an integration over the time period of a pretreatment performed at a certain temperature, was calculated by Eq.
2:

$$ R_{0} = \int_{0}^{t} \exp\left(\frac{T(t') - 100}{14.75}\right)\,\mathrm{d}t' = t \times \exp\left(\frac{T - 100}{14.75}\right) $$

where t is the holding time of the treatment in minutes, T is the treatment temperature (held constant over the holding time), and 100 °C is the reference temperature [18]. The use of Eq. 3 gives a fairer comparison of pretreatment severities, even at widely different pretreatment pH values [18]:

$$ \log\left(R_{0}^{\prime\prime}\right) = \log\left(R_{0}\right) + \left|\mathrm{pH} - 7\right| $$

Scanning electron microscopy (SEM)

The morphology of sugarcane bagasse before and after hydrothermal and lime pretreatments was examined by SEM. Samples were fixed with carbon tape on an aluminum support ("stub") and sputter-coated with 10 nm of gold. Photomicrographs were obtained on SM 300 equipment with an electron beam voltage of 20 kV. Several images were obtained from different areas of the samples in order to ensure reliable results.

Thermogravimetry (TG/DTA)

The thermogravimetric (TG) curves were obtained on a Netzsch thermobalance, using an alumina crucible, under the following conditions: heating rate of 10 °C min−1, temperature range 10–900 °C, oxidizing atmosphere with a gas flow of 40 mL min−1. The DTA curves were obtained using TA analysis software.

The crystallinity of sugarcane bagasse before and after pretreatment was analyzed by X-ray diffraction on a Siemens D5000 diffractometer employing Co-Kα radiation. Scans were obtained from 5° to 20° 2θ (Bragg angle) at a scanning rate of 0.05° per second. Powder sample data were recorded at room temperature. The percentage of crystalline material in the biomass was expressed as the crystallinity index (Ic), which was calculated by Eq. 4, following the procedure proposed by Segal et al.
[31]:

$$ I_{\text{c}} = \frac{I_{002} - I_{\text{am}}}{I_{002}} \times 100 $$

in which I002 is the intensity of the 002 peak (2θ = 22°) and Iam is the intensity of the amorphous-phase peak (2θ = 16°).

Experimental design and data analysis

A full factorial design with repetition was applied to evaluate the main and interaction effects of the time (7, 30, and 60 min) and type (lime or hydrothermal) of pretreatment on raw sugarcane bagasse composition and saccharification performance. The statistical significance of the data was evaluated by analysis of variance (ANOVA) at a 95 % confidence level. Minitab software version 15.0 (Minitab Inc., Pennsylvania) was used for the experimental design and statistical analyses.

Abbreviations: LAP: laboratory analytical procedure; NREL: National Renewable Energy Laboratory; GOD-PAP: glucose oxidase/peroxidase-phenol-4-aminophenazone; DNS: dinitrosalicylic acid; FPU: filter paper units; U: enzyme unit

References

Goldemberg J. The Brazilian biofuels industry. Biotechnol Biofuels. 2008;1:6.
Soccol CR, de Souza Vandenberghe LP, Medeiros ABP, Karp SG, Buckeridge M, Ramos LP, et al. Bioethanol from lignocelluloses: status and perspectives in Brazil. Bioresour Technol. 2010;101(13):4820–5.
Canilha L, Chandel AK, Milessi TSS, Antunes FAF, Freitas WLC, Felipe MGA, Silva SS. Bioconversion of sugarcane biomass into ethanol: an overview about composition, pretreatment methods, detoxification of hydrolysates, enzymatic saccharification, and ethanol fermentation. J Biomed Biotechnol. 2012;2012:1–15.
Macrelli S, Mogensen J, Zacchi G. Techno-economic evaluation of 2nd generation bioethanol production from sugar cane bagasse and leaves integrated with the sugar-based ethanol process. Biotechnol Biofuels. 2012;5(1):1–18.
Gupta A, Verma JP. Sustainable bio-ethanol production from agro-residues: a review. Renew Sustain Energy Rev. 2015;41:550–67.
Pereira SC, Maehara L, Machado CMM, Farinas CS.
2G ethanol from the whole sugarcane lignocellulosic biomass. Biotechnol Biofuels. 2015;8(1):44.
Pandey A, Soccol CR, Nigam P, Soccol VT. Biotechnological potential of agro-industrial residues. I: sugarcane bagasse. Bioresour Technol. 2000;74(1):69–80.
Balat M. Production of bioethanol from lignocellulosic materials via the biochemical pathway: a review. Energy Convers Manag. 2011;52(2):858–75.
Sarkar N, Ghosh SK, Bannerjee S, Aikat K. Bioethanol production from agricultural wastes: an overview. Renewable Energy. 2012;37(1):19–27.
Hendriks A, Zeeman G. Pretreatments to enhance the digestibility of lignocellulosic biomass. Bioresour Technol. 2009;100(1):10–8.
Kumar P, Barrett DM, Delwiche MJ, Stroeve P. Methods for pretreatment of lignocellulosic biomass for efficient hydrolysis and biofuel production. Ind Eng Chem Res. 2009;48(8):3713–29.
Alvira P, Tomás-Pejó E, Ballesteros M, Negro M. Pretreatment technologies for an efficient bioethanol production process based on enzymatic hydrolysis: a review. Bioresour Technol. 2010;101(13):4851–61.
Laluce C, Schenberg A, Gallardo J, Coradello L, Pombeiro-Sponchiado S. Advances and developments in strategies to improve strains of Saccharomyces cerevisiae and processes to obtain the lignocellulosic ethanol: a review. Appl Biochem Biotechnol. 2012;166(8):1908–26.
Mood SH, Golfeshan AH, Tabatabaei M, Jouzani GS, Najafi GH, Gholami M, et al. Lignocellulosic biomass to bioethanol, a comprehensive review with a focus on pretreatment. Renew Sustain Energy Rev. 2013;27:77–93.
Pu Y, Hu F, Huang F, Davison BH, Ragauskas AJ. Assessing the molecular structure basis for biomass recalcitrance during dilute acid and hydrothermal pretreatments. Biotechnol Biofuels. 2013;6:15.
Singh R, Shukla A, Tiwari S, Srivastava M. A review on delignification of lignocellulosic biomass for enhancement of ethanol production potential. Renew Sustain Energy Rev. 2014;32:713–28.
Chiaramonti D, Prussi M, Ferrero S, Oriani L, Ottonello P, Torre P, et al.
Review of pretreatment processes for lignocellulosic ethanol production, and development of an innovative method. Biomass Bioenergy. 2012;46:25–35.
Pedersen M, Meyer AS. Lignocellulose pretreatment severity–relating pH to biomatrix opening. New Biotechnol. 2010;27(6):739–50.
Galbe M, Zacchi G. Pretreatment: the key to efficient utilization of lignocellulosic materials. Biomass Bioenergy. 2012;46:70–8.
Saha BC, Yoshida T, Cotta MA, Sonomoto K. Hydrothermal pretreatment and enzymatic saccharification of corn stover for efficient ethanol production. Ind Crops Prod. 2013;44:367–72.
Laser M, Schulman D, Allen SG, Lichwa J, Antal MJ, Lynd LR. A comparison of liquid hot water and steam pretreatments of sugar cane bagasse for bioconversion to ethanol. Bioresour Technol. 2002;81(1):33–44.
Rabelo SC, Maciel Filho R, Costa AC. A comparison between lime and alkaline hydrogen peroxide pretreatments of sugarcane bagasse for ethanol production. Appl Biochem Biotechnol. 2008;148(1–3):45–58.
Rabelo SC, Maciel Filho R, Costa AC. Lime pretreatment of sugarcane bagasse for bioethanol production. Appl Biochem Biotechnol. 2009;153(1–3):139–50.
Chang VS, Nagwani M, Holtzapple MT. Lime pretreatment of crop residues bagasse and wheat straw. Appl Biochem Biotechnol. 1998;74(3):135–59.
Sluiter A, Hames B, Ruiz R, Scarlata C, Sluiter J, Templeton D, Crocker D. Determination of structural carbohydrates and lignin in biomass. In: Laboratory analytical procedure (LAP). 1617 Cole Boulevard, Colorado: National Renewable Energy Laboratory (NREL); 2008.
Selig M, Weiss N, Ji Y. Enzymatic saccharification of lignocellulosic biomass. In: Laboratory analytical procedure (LAP). 1617 Cole Boulevard, Colorado: National Renewable Energy Laboratory (NREL); 2008.
Ghose T. Measurement of cellulase activities. Pure Appl Chem. 1987;59(2):257–68.
Trinder P. Determination of glucose in blood using glucose oxidase with an alternative oxygen acceptor. Ann Clin Biochem. 1969;6:24–7.
Miller GL.
Use of dinitrosalicylic acid reagent for determination of reducing sugar. Anal Chem. 1959;31(3):426–8. Pham PJ, Hernandez R, French WT, Estill BG, Mondala AH. A spectrophotometric method for quantitative determination of xylose in fermentation medium. Biomass Bioenergy. 2011;35(7):2814–21. Segal L, Creely J, Martin A, Conrad C. An empirical method for estimating the degree of crystallinity of native cellulose using the X-ray diffractometer. Text Res J. 1959;29(10):786–94. Playne M. Increased digestibility of bagasses by pretreatment with alkalis and steam explosion. Biotechnol Bioeng. 1984;26(5):426–33. Fuentes LL, Rabelo SC. Maciel Filho R, Costa AC. Kinetics of lime pretreatment of sugarcane bagasse to enhance enzymatic hydrolysis. Appl Biochem Biotechnol. 2011;163(5):612–25. Rabelo SC. Maciel Filho R, Costa AC. Lime pretreatment and fermentation of enzymatically hydrolyzed sugarcane bagasse. Appl Biochem Biotechnol. 2013;169(5):1696–712. Canilha L, Santos VT, Rocha GJ, e Silva JBA, Giulietti M, Silva SS et al. A study on the pretreatment of a sugarcane bagasse sample with dilute sulfuric acid. J Indust Microbiol Biotechnol. 2011;38(9):1467–75. Chang VS, Burr B, Holtzapple MT. Lime pretreatment of switchgrass. Appl Biochem Biotechnol. 1997;63:3–19. Chang VS, Nagwani M, Kim C-H, Holtzapple MT. Oxidative lime pretreatment of high-lignin biomass. Appl Biochem Biotechnol. 2001;94(1):1–28. Mosier N, Wyman C, Dale B, Elander R, Lee Y, Holtzapple M, et al. Features of promising technologies for pretreatment of lignocellulosic biomass. Bioresour Technol. 2005;96(6):673–86. Saha BC, Cotta MA. Lime pretreatment, enzymatic saccharification and fermentation of rice hulls to ethanol. Biomass Bioenergy. 2008;32(10):971–7. Gupta R, Lee Y. Investigation of biomass degradation mechanism in pretreatment of switchgrass by aqueous ammonia and sodium hydroxide. Bioresour Technol. 2010;101(21):8185–91. Park YC, Kim JS. 
Comparison of various alkaline pretreatment methods of lignocellulosic biomass. Energy. 2012;47(1):31–5. Chen Y, Stevens MA, Zhu Y, Holmes J, Xu H. Understanding of alkaline pretreatment parameters for corn stover enzymatic saccharification. Biotechnol Biofuels. 2013;6:8. Cardona C, Quintero J, Paz I. Production of bioethanol from sugarcane bagasse: status and perspectives. Bioresour Technol. 2010;101(13):4754–66. Hatfield R, Fukushima RS. Can lignin be accurately measured? Crop Sci. 2005;45(3):832–9. Galbe M, Zacchi G. Pretreatment of lignocellulosic materials for efficient bioethanol production. Adv Biochem Eng/Biotechnol. 2007;108:41–65. Agbor VB, Cicek N, Sparling R, Berlin A, Levin DB. Biomass pretreatment: fundamentals toward application. Biotechnol Adv. 2011;29(6):675–85. Pedersen M, Viksø-Nielsen A, Meyer AS. Monosaccharide yields and lignin removal from wheat straw in response to catalyst type and pH during mild thermal pretreatment. Process Biochem. 2010;45(7):1181–6. Chang VS, Holtzapple MT. Fundamental factors affecting biomass enzymatic reactivity. Appl Biochem Biotechnol. 2000;84–86:5–37. Lee YH, Fan L. Kinetic studies of enzymatic hydrolysis of insoluble cellulose: analysis of the initial rates. Biotechnol Bioeng. 1982;24(11):2383–406. Rezende CA, de Lima MA, Maziero P, Ribeiro deAzevedo E, Garcia W, Polikarpov I. Chemical and morphological characterization of sugarcane bagasse submitted to a delignification process for enhanced enzymatic digestibility. Biotechnol Biofuels. 2011;4:54. Xu J, Cheng JJ, Sharma-Shivappa RR, Burns JC. Lime pretreatment of switchgrass at mild temperatures for ethanol production. Bioresour Technol. 2010;101(8):2900–3. Xu J, Cheng JJ. Pretreatment of switchgrass for sugar production with the combination of sodium hydroxide and lime. Bioresour Technol. 2011;102(4):3861–8. Chen Y, Sun L, Negulescu II, Moore MA, Collier BJ. Evaluating efficiency of alkaline treatment for waste bagasse. J Macromol Sci Part B Phys. 
2005;44(3):397–411. Mothé CG, de Miranda IC. Characterization of sugarcane and coconut fibers by thermal analysis and FTIR. J Therm Anal Calorim. 2009;97(2):661–5. Bernabé GA, Almeida S, Ribeiro C, Crespi M. Evaluation of organic molecules originated during composting process. J Therm Anal Calorim. 2011;106(3):773–8. Chundawat SP, Beckham GT, Himmel ME, Dale BE. Deconstruction of lignocellulosic biomass to fuels and chemicals. Ann Rev Chem Biomol Eng. 2011;2:121–45. Ishizawa CI, Davis MF, Schell DF, Johnson DK. Porosity and its effect on the digestibility of dilute sulfuric acid pretreated corn stover. J Agric Food Chem. 2007;55(7):2575–81. Sheikh MMI, Kim C-H, Park H-J, Kim S-H, Kim G-C, Lee J-Y, et al. Alkaline Pretreatment Improves Saccharification and Ethanol Yield from Waste Money Bills. Biosci Biotechnol Biochem. 2013;77(7):1397–402. Kim S, Holtzapple MT. Effect of structural features on enzyme digestibility of corn stover. Bioresour Technol. 2006;97(4):583–91. MPG carried out the pretreatments of raw sugarcane bagasse and the determination of chemical composition and structural analysis of pretreated bagasse, participated in the statistical analysis of data, and drafted the manuscript. MPM performed the enzymatic hydrolysis experiments of the pretreated bagasse, participated in the statistical analysis of data, and drafted the manuscript. CL participated in the design of the study and helped to revise the manuscript. EMC participated in the sugar analysis and helped to draft the manuscript. SRPS designed research and coordinated the overall study, participated in the interpretation of the results, and revised the manuscript. All authors read and approved the final manuscript. The authors thank FAPESP (São Paulo Research Foundation) for the financial support for this study by Bioenergy Research Program (Process No. 
2008/56247-6) and would like to thank CNPq (National Counsel of Technological and Scientific Development) and CAPES (Coordination for Improvement of Higher Education Personnel) for the scholarship granted to the authors MPG and MPM, respectively. Department of Biochemistry and Technology Chemistry, Institute of Chemistry, São Paulo State University-UNESP, R. Prof. Francisco Degni 55, Araraquara, SP, CEP 14800–060, Brazil Maira Prearo Grimaldi , Marina Paganini Marques , Cecília Laluce , Eduardo Maffud Cilli & Sandra Regina Pombeiro Sponchiado Search for Maira Prearo Grimaldi in: Search for Marina Paganini Marques in: Search for Cecília Laluce in: Search for Eduardo Maffud Cilli in: Search for Sandra Regina Pombeiro Sponchiado in: Correspondence to Sandra Regina Pombeiro Sponchiado. Grimaldi, M.P., Marques, M.P., Laluce, C. et al. Evaluation of lime and hydrothermal pretreatments for efficient enzymatic hydrolysis of raw sugarcane bagasse. Biotechnol Biofuels 8, 205 (2015) doi:10.1186/s13068-015-0384-y Lime pretreatment Hydrothermal pretreatment
CommonCrawl
Winter severity and snowiness and their multiannual variability in the Karkonosze Mountains and Jizera Mountains
Grzegorz Urban1, Dáša Richterová2, Stanislava Kliegrová3, Ilona Zusková4 & Piotr Pawliczek5
Theoretical and Applied Climatology volume 134, pages 221–240 (2018)
This paper analyses winter severity and snow conditions in the Karkonosze Mountains and Jizera Mountains and examines their long-term trends. The analysis used modified comprehensive winter snowiness (WSW) and winter severity (WOW) indices as defined by Paczos (1982). An attempt was also made to determine the relationship between the WSW and WOW indices. Measurement data were obtained from eight stations operated by the Institute of Meteorology and Water Management – National Research Institute (IMGW–PIB), from eight stations operated by the Czech Hydrological and Meteorological Institute (CHMI) and also from the Meteorological Observatory of the University of Wrocław (UWr) on Mount Szrenica. Essentially, the study covered the period from 1961 to 2015. In some cases, however, the period analysed was shorter due to the limited availability of data, which was conditioned, inter alia, by the period of operation of the station in question, and its type. Viewed on a macroscale, snow conditions in the Karkonosze Mountains and Jizera Mountains (in similar altitude zones) are clearly more favourable on southern slopes than on northern ones. In the study area, negative trends have been observed with respect to both the WSW and WOW indices—winters have become less snowy and warmer. The correlation between the WOW and WSW indices is positive. At stations with northern macroexposure, WOW and WSW show greater correlation than at ones with southern macroexposure. This relationship is the weakest for stations that are situated in the upper ranges (Mount Śnieżka and Mount Szrenica).
Among the basic research problems in modern climatology is that of establishing climate trends and variability, including in the winter period, through the analysis of winter severity and snowiness. Snow cover and its multiannual trends are the result of the direct or indirect simultaneous impact of multiple climate components and factors and of their changes in subsequent years (Foster et al. 1983; Falarz 2004). Any changes in snow cover thickness and retention time may have long-lasting environmental and economic implications (Beniston 1997, 2000; Beniston et al. 2003). Economic losses related to snow cover deficits are of particular importance in mountain areas where a significant proportion of tourism income is derived from winter sports (Beniston 2000). The climate changes predicted to take place until 2050, i.e. progressive warming, will cause a gradual decrease in the surface area of glaciers and in snow cover; the snow line and vegetation zones will move upwards and northwards (IPCC 2001, 2013; Migała 2005). It is estimated that in mountain regions, an average increase in air temperatures of 1 °C may be accompanied by an upward shift in the snow line of around 150 m (Haeberli and Beniston 1998). The results of snow cover studies confirm its high sensitivity to climate change and to individual climate components (Cayan 1996; Bednorz 2004; Stewart et al. 2005) as well as to progressive warming (Karl et al. 1993; Dettinger and Cayan 1995; Stewart et al. 2004; Hidalgo et al. 2009). This topic is particularly salient, because the increase in mean annual air temperatures is most affected by the temperature of winter months (Hess 1974; Trepińska 1976; Wibig and Głowicki 2002; Piotrowicz 2006; Rebetez and Reinhard 2008; Kliegrová et al. 2009). The progressive warming results in shorter winters and earlier spring thaws (Bednorz 2004; Migała et al. 2016). The phenomenon of snowy or severe winters is of interest to many people and institutions, e.g. 
in connection with the considerable damage these can wreak in many economies (Beniston et al. 2003; Bednorz 2008, 2013; Kulasová and Bubeníčková 2009). The issue of winter snowiness and severity is a complex one, and it is insufficient to apply just a single criterion to its assessment (Piasecki 1995; Piotrowicz 2004; Mayes Boustead et al. 2015). Hence, several criteria are usually used simultaneously to classify and assess winters (Domonkos and Piotrowicz 1998; Piotrowicz 2006). The overwhelming majority of research deals with the severity of winters in terms of temperature (Obrębka-Starklowa et al. 1995; Piotrowicz 1997; Twardosz and Kossowska-Cezak 2016), and to a lesser extent with winter snowiness and snow cover variability in the mountains (Jackson 1977; Piasecki 1995; Falarz 2000-2001; Lapin et al. 2007; Príbullová et al. 2009; Urban 2016). Far fewer articles discuss winter temperatures and snow conditions at the same time (Paczos 1982, 1985; Niedźwiecki 1998; Janasz 2000; Majewski et al. 2011). Describing the winter characteristics in the Karkonosze Mountains and Jizera Mountains is a difficult task due to the considerable variability of weather conditions in the temperate transitional climate zone in which both mountain ranges are located. Hence, there is considerable intraseasonal and long-term variability between winters (Majewski et al. 2011). In turn, interseasonal variability of weather conditions is a typical feature of the Karkonosze Mountains climate and is determined primarily by the zonal atmospheric circulation that forms over the North Atlantic. This is clearly illustrated by the fact that temperature conditions on Mount Śnieżka are dependent on the North Atlantic Oscillation (NAO) index (Sobik et al. 2014; Migała et al. 2016).
The purpose of this study is to analyse the severity and snowiness of winters and to examine their long-term trends in the Karkonosze Mountains and Jizera Mountains on the Polish and Czech sides of both mountain ranges. Additionally, attempts have been made to determine the relationship between winter severity and snowiness. Among the motivations for conducting research on this theme was the fact that to date, literature on the subject lacks a comprehensive, methodologically uniform study that would cover such a long period and use a larger number of stations in the Western Sudetes on both sides of the Polish-Czech border. The studies already published have concerned the comparison of snow conditions on both sides of the border in the context of skiing opportunities (Urban and Richterová 2010) or were case studies (Ojrzyńska et al. 2010; Urban et al. 2011), but no analysis of long-term trends or assessment of winter conditions was offered.
Source data and methods
The measurement data used in this paper were obtained from meteorological stations of the Institute of Meteorology and Water Management – National Research Institute (IMGW–PIB), from the Czech Hydrological and Meteorological Institute (CHMI) and from the Meteorological Observatory of the University of Wrocław (UWr) on Mount Szrenica (Fig. 1; Table 1). The source material consisted of daily mean, minimum, and maximum air temperature values, and daily depths of snow cover on the ground at 06.00 UTC. The basic research period included 55 consecutive winter seasons (winters) from the 1961/1962–2015/2016 multiannual period. In some cases, the period analysed was shorter (Table 1). This resulted from the availability of data, which was conditioned by, inter alia, the operation of the station in question, changes in station type (climatological, rain gauge) and thus the number of meteorological parameters measured.
The choice of stations was primarily guided by their suitability for assembling long data series and the completeness of their measurement sequences. In addition, the stations were selected so that there were no significant changes in their locations and so that they represented the different altitude zones present in the Karkonosze Mountains and Jizera Mountains.
Fig. 1 Locations of the measurement stations used in the study. Station name abbreviations as per Table 1
Table 1 Characteristics of weather stations
The authors wanted to document the distinct snow and temperature conditions on the southern and northern slopes of the Karkonosze Mountains and Jizera Mountains as reflected by winter severity and snowiness as accurately as possible. For this reason, all the reliable materials available have been used, starting from 1961. Nevertheless, winter severity and snowiness indices were calculated for a uniform period consisting of the 30 consecutive seasons from 1981/1982 to 2010/2011, and measurements from fewer stations were used. It is worth stressing that the results obtained were similar to those for the periods of different lengths (from 15 to 55 years) described in Table 1. The calculations included 9 out of 13 stations in the case of the winter severity index and 11 out of 16 stations in the case of the winter snowiness index. Studies covering 30-year periods form the basis for determining the climate characteristics of individual areas, and the 1981–2010 multiannual period is currently recommended for analysis purposes by the World Meteorological Organisation and is the so-called standard or reference period (WMO 2011). A similar approach, using data series of different lengths, was applied to snow cover studies in Austria (Hantel et al. 2000) and in the mountains of Bulgaria (Brown and Petkova 2007).
This approach is consistent with the accepted view that even mean 10-year values of individual meteorological parameters can be used to capture the spatial diversity of climate and make it possible to detect general patterns that govern the phenomenon in question (Hess 1965; Hess et al. 1980). Other authors report that the optimal period for reliably determining mean air temperature for a station that has a data series significantly shorter than 30 years ranges from 5 to 15 years (Huang et al. 1996) or from 5 to 20 years for extreme temperature values (Srivastava et al. 2003). Huang et al. (1996) stated that if mean values are to be determined, the exact length of the period adopted for this purpose within the 10–30 year range is not significant. In turn, Sansom and Tait (2004) believe that using shorter data series in local studies of temperature and precipitation fields significantly improves the quality of the results obtained for the study area compared to the use of simple spatial interpolation. Using the example of stations located on the northern and southern slopes of the Alps, Marty (2008) demonstrated that when analysing long-term mean values, there is considerable similarity between snow cover characteristics at stations located at comparable altitudes. This is present despite the possible lack of homogeneity caused by the changes in station locations and the different characteristics of their locations. In the case of snow cover, there are unlikely to be any breaks in the homogeneity of the multiannual measurement series. This is due to the simplicity of the measurement and the fact that observation timing and methods have not changed since World War II (Falarz 2006). A change in station location appears to be the only threat to the homogeneity of such series (Falarz 2006). 
The snow cover measurement methodology was uniform at all the stations analysed and was in line with the guidelines for the National Hydrological and Meteorological Service of the Institute of Meteorology and Water Management (IMGW) (Janiszewski 1988). A similar measurement methodology is used at the CHMI (Lipina et al. 2014). In the paper, stations whose location changed significantly were treated as stations with two separate measurement series and were labelled accordingly, e.g. Karpacz_1 and Karpacz_2 or Benecko_1 and Benecko_2 (Table 1). Therefore, the snow cover data series used should be considered homogeneous. Small gaps in data series that were present for some stations were eliminated using the arithmetic means method on the basis of measurement data from the nearest stations that represented similar climatic and morphological conditions. Since IMGW–PIB and CHMI calculate mean daily air temperatures in different fashions, a uniform method was used. Namely, the mean daily temperature (T) was determined for all stations according to the following formula: T = (Tmax + Tmin)/2, where: Tmax: maximum daily air temperature [°C]; Tmin: minimum daily air temperature [°C]. This method was used successfully in previous studies of trends in air temperature (Migała et al. 2016; Urban and Tomczyński 2017). The characteristics of various methods for determining mean air temperatures, and the differences between them were presented by Urban (2010, 2013). In the study, winter was assumed to last from 1 November to 30 April of the following year. This definition of winter stemmed from the fact that during the aforementioned months, weather conditions typical of this season, i.e. snow cover and negative air temperatures, are present in the mountains, as well as from the method adopted for defining winter. The only exception was that analyses of the start and end dates for snow cover were performed from 1 August of year X to 31 July of year X + 1. 
This was due to the fact that in the upper range of the Karkonosze Mountains (Mount Śnieżka—1603 m a.s.l.; Mount Szrenica—1331 m a.s.l.), snow cover on the ground sometimes even occurs in summer months. This approach has already been applied before (Falarz 2000-2001; Urban 2015, 2016). A day with snow cover was assumed to be a day on which snow depth measured at 6.00 UTC was not less than 1 cm and snow covered at least 50% of the area. This limitation made it possible to eliminate days on which snow fell but conditions were not conducive to snow cover forming. This condition mainly applies to the period of snow cover formation and disappearance. The adoption of this assumption made it possible to clearly define characteristic periods (number of days with snow cover and potential period) and their start and end dates. Winter temperature and snowiness have been described in terms of the fundamental climatological characteristics of these meteorological parameters. A synthesis of those characteristics was presented by Paczos (1982) for the climatic conditions prevailing in Poland, including in mountain areas, in the form of indices describing winter severity and snowiness (from December to March). These indices have been used in this paper with minor modifications. The winter severity index (WOW) formula developed by Paczos (1982) has the following form: $$ \mathrm{WOW}=\left(1-0.25\times \mathrm{Tw}\right)\times 0.8325+0.0144\times \mathrm{NDw}+0.0087\times \mathrm{NDf}+0.0045\times \mathrm{NDvf}-0.0026\times \mathrm{ST} $$ WOW: winter severity index; Tw: mean winter air temperature (°C); NDw: number of winter days (with mean daily temperature ≤ 0 °C); NDf: number of ice days (with maximum temperature < 0 °C); NDvf: number of very frosty days (with minimum temperature < − 10 °C); ST: sum of mean daily temperature values < 0 °C (°C). Values of the winter severity index range from 0 to 10, where 0 denotes the mildest winter and 10 the coldest one.
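For illustration only (the paper does not publish code), the WOW formula can be evaluated from daily temperature series as in the sketch below. The function name and inputs are hypothetical, and the day definitions follow the original Paczos (1982) wording quoted above.

```python
def winter_severity_index(t_mean, t_max, t_min):
    """Sketch of the Paczos (1982) WOW index for one winter season.

    t_mean, t_max, t_min: daily mean, maximum, and minimum air
    temperatures (deg C) for 1 November to 30 April.
    """
    tw = sum(t_mean) / len(t_mean)              # Tw: mean winter temperature
    nd_w = sum(1 for t in t_mean if t <= 0.0)   # NDw: winter days
    nd_f = sum(1 for t in t_max if t < 0.0)     # NDf: ice days
    nd_vf = sum(1 for t in t_min if t < -10.0)  # NDvf: very frosty days
    st = sum(t for t in t_mean if t < 0.0)      # ST: sum of negative daily means
    return ((1 - 0.25 * tw) * 0.8325
            + 0.0144 * nd_w + 0.0087 * nd_f + 0.0045 * nd_vf
            - 0.0026 * st)
```

Note that ST is a sum of negative values, so the −0.0026 × ST term increases the index in cold winters, consistent with higher WOW denoting a more severe winter.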
In the formula above, the modification proposed by Janasz (2000) was introduced. The modification of Paczos's (1982) formula consisted of defining very frosty days as days with a maximum temperature (rather than minimum temperature as suggested by Paczos) below − 10 °C. Majewski et al. (2011) had previously made a similar decision. In addition, a winter day was assumed to be a day with a mean temperature lower than 0 °C rather than lower than or equal to 0 °C (as proposed by Paczos). The changes adopted are in line with the current definition of a very frosty day and of a winter day (Niedźwiedź et al. 2003; ČMeS 2016). The winter snowiness index (WSW) was also calculated using a formula proposed by Paczos (1982): $$ \mathrm{WSW}=0.0328\times \mathrm{NDSC}+0.0246\times \mathrm{NDSC20}+0.00012\times \mathrm{SSC} $$ WSW: winter snowiness index ranging from 0 to 10; NDSC: number of days with snow cover ≥ 1 cm deep; NDSC20: number of days with snow cover > 20 cm deep; SSC: sum of snow cover depths in cm. In the formula above, a modification was introduced to change the inequality sign before the threshold value from > 20 cm to ≥ 20 cm. Whereas in the winter severity formula, the assumption that winter lasts from November to April rather than from December to March as in the original (Paczos 1982) does not change the number of classes (0–10), this is not the case with the snowiness formula. Namely, extending the original winter period by 2 months increases the number of classes to 15. This results from the fact that the winter snowiness formula contains the sum of snow cover thickness (depth) during the period analysed. Therefore, the addition of two more months (November and April) results in an increase in the number of classes by 50%. Using average multiannual values of the winter severity (WOWavg) and snowiness (WSWavg) indices and their standard deviations (δ), three groups of winters have been distinguished both in terms of severity and snowiness.
From the point of view of their severity, winters have been divided into: frosty (WOW ≥ WOWavg + δ); moderately frosty (WOWavg − δ < WOW < WOWavg + δ); mild (WOW ≤ WOWavg − δ), while from the point of view of snowiness, they have been divided into: snowy (WSW ≥ WSWavg + δ); moderately snowy (WSWavg − δ < WSW < WSWavg + δ); with little snow (WSW ≤ WSWavg − δ). The boundaries between the groups have been determined separately for each station (see Tables 2 and 4).
Table 2 Values of the winter severity index and its standard deviation (δ), coefficient of variation (CV), and frequency of severe and mild winters (N)
The division presented made it possible to transform an absolute classification into a relative one. Thus, it enabled a comparison to be made between individual winters at a single measurement station throughout the entire period. A similar approach to winter classification based on average multiannual values of these indices and their standard deviations was used in earlier work (Paczos 1982; Janasz 2000; Majewski et al. 2011; Twardosz and Kossowska-Cezak 2016). For stations with at least a 30-year measurement data series, trends in winter severity and snowiness were also determined. Student's t test was used to verify the statistical significance of those trends at the p < 0.05 significance level. A comprehensive index that combined, inter alia, air temperature, total snowfall, snow cover depth, and the duration of the winter season has been successfully applied in studies of winter severity and associated trends in the US (Mayes Boustead et al. 2015).
Winter severity
Average values of the WOW winter severity index in the study area range from about 6–7 (moderate winters) in the upper ranges of the Karkonosze Mountains (Mount Śnieżka, Mount Szrenica) to around 2 (winters with little snow) at the foot of the Jizera Mountains (Hejnice).
As concerns extreme values, the WOW index at all the stations analysed ranged from 0.0 (for the 2006/2007 season in Hejnice) to 9.2 (for the 1969/1970 season on Mount Śnieżka). In general, the distribution of WOW index values correlates with the absolute altitude of each station and is directly proportional to it (Fig. 2). The higher the WOW index value, the more severe the winter in temperature terms. However, there are some deviations from this general pattern that result from the local landform and the location of the station in question. An example is the Jelenia Góra station, which is located in a large mid-mountain basin and is characterised by frequent and intense air temperature inversions (Głowicki 1970; Hess et al. 1980). On 10 February 1956, in a Stevenson screen situated 2 m above ground level at that station, the absolute minimum air temperature recorded at meteorological stations in Poland in the post-World War II period was measured at − 36.9 °C (Kuziemska 1983). This is the reason why the WOW index for that station is significantly higher than indicated by its altitude above sea level (Figs. 2-3). The WOW value for Jelenia Góra is close to WOW values for stations situated 200 to 300 m higher (Świeradów Zdrój, Karpacz_2, or Szklarska Poręba). This means that in this case, the change in air temperature as a function of altitude above sea level is considerably distorted at locations that are situated close to one another. In this case, thermal conditions at the station in question are affected more by the landform than by its exposure to the sun (or lack thereof). 
Fig. 2 Average (Avg), maximum (Max), and minimum (Min) values of the winter severity index (WOW) at the stations analysed
Fig. 3 The relationship between station altitude (H) and the average WOW index value for southern (S) and northern (N) macroexposures
The influence of solar radiation and exposure and also of the landform on air temperatures in the Polish Sudetes as reflected by different values of vertical temperature gradient in different seasons was described by Schmuck (1969). This issue was also noted in later studies that described the topoclimate of the Jizera Mountains (Sobik and Urban 2000; Urban 2002) or the climate of the Karkonosze Mountains (Sobik et al. 2014). Also of note are the relatively higher WOW index values at CHMI stations that are situated around 670 m a.s.l. (Harrachov, Vysoké nad Jizerou) when compared to the stations situated on the northern (Polish) side of the mountains at similar altitudes (Szklarska Poręba). The difference in favour of CHMI stations amounts to approximately one WOW class on average (Figs. 2-3). In this case, the differentiating factor is the exposure of the station in question to the advections of air masses from the south, southwest, and west that prevail in winter (S, SW, and W macroexposure). For example, the share of winds from the S, SW, and W directions for the November–April period in the years 1961–1990 on Mount Śnieżka (the highest peak in the Sudetes) was 55.4% in total (12.3, 23.0, and 20.1%, respectively) (Głowicki 1995). Slope exposure to circulation from the aforementioned directions, which prevails in winter, exerts a clear influence on determining the winter temperature and precipitation field on the Karkonosze Mountains, as demonstrated by Sobik et al. (2014). Apart from atmospheric circulation, the orientation of the main ridge of the Karkonosze Mountains (along the WNW-ESE axis) is a secondary factor affecting the thermal characteristics of air masses around this massif.
During advection from the southwest or southern directions, the stations located on the Polish (leeward) side of the Karkonosze Mountains are subject to the warming effect of catabatic foehn winds (Kwiatkowski 1972, 1975, 1979). The air moving downslope on the leeward side of the mountains is heated adiabatically. Thus, the foehn contributes, inter alia, to raising mean winter temperatures at stations on leeward (northern) slopes compared to stations on the windward (southern) slopes at the same altitudes. During the winter months, at altitudes ranging from 600 to 800 m a.s.l., the northern slopes of the Karkonosze Mountains and Jizera Mountains are around 0.5–1.0 °C warmer than the southern slopes (Sobik et al. 2009). The difference in mean air temperatures in winter between stations on opposite sides of the mountain ridge increases together with the difference in altitude between the ridge and the station in question. Owing to the influence of foehn winds and its location at the bottom of a large basin, Jelenia Góra experiences alternating waves of freezing and thawing of varying durations and exhibiting thermal anomalies to various degrees (Głowicki 1993). The relationships observed for various periods at the stations analysed, which have been described above and presented graphically (Figs. 2-3), are confirmed by the results obtained for the 1981–2010 period at selected stations (methodological justification presented in Chapter 2). The average WOW index values (illustrating climate patterns) at stations for the period 1981–2010 are almost identical with those for the periods presented in Table 1. The same is true for minimum index values as those occurred in the 2001–2010 period at most stations. Only in a few cases were the maximum WOW values for the 1961–2015 period slightly higher than that for 1981–2010, since the maximum values were recorded in the 1960s (Fig. 4). 
Fig. 4 Average (Avg), maximum (Max), and minimum (Min) values of the winter severity index (WOW) at selected stations
Temporal and spatial differences and trends
The winter severity calendar, developed on the basis of multiannual severity index averages (WOWavg) and standard deviations, demonstrates that moderately frosty winters predominated at all the stations analysed. In addition, since the late 1980s, the winters' thermal characteristics have changed compared to earlier decades of the twentieth century. The prevalence of mild winters has increased, while frosty winters have become less frequent. The pattern found is consistent with the results obtained for Europe, where in the second half of the 1951–2010 period, an increase in the frequency of exceptionally mild winters was observed at the expense of exceptionally cold ones (Twardosz and Kossowska-Cezak 2016). In the last two or three decades, an increase in the frequency of warm winters has also been recorded in the Swiss Alps (Scherrer et al. 2004; Marty 2008) and the USA (Mayes Boustead et al. 2015). In the last 10 years, frosty winters were virtually absent (except for a few locations with northern exposure). The warmest winters within the altitude profile found in the Karkonosze Mountains and Jizera Mountains were 1988/1989–1989/1990, 2006/2007, and 2013/2014–2015/2016 (Table 2). The winter of 2006/2007 was extremely warm, with the WOW index reaching its lowest values at all stations (WOW ≤ WOWavg – 2 × δ)—from around 0.0 at the foot of the mountains to just 5.0 in the upper ranges of the Karkonosze Mountains. The winter of 2006/2007 was exceptionally warm in most parts of Europe (Luterbacher et al. 2007; Twardosz and Kossowska-Cezak 2016). Winters with high WOW indices at most stations were 1962/1963, 1968/1969–1969/1970, 1984/1985–1986/1987, 1995/1996, and 2005/2006. The highest WOW values noted in those winters (from 4.0 in Hejnice to 9.2 on Mount Śnieżka) were recorded in 1962/1963 and 1995/1996.
During those winters, the highest WOW values for the multiannual period analysed were observed at all individual stations (Table 2). High WOW index values were present in the winter of 1995/1996 at most stations in Poland (Olba-Zięty and Grabowski 2007), whereas the winter of 1962/1963 was exceptionally cold in most of Europe (Twardosz and Kossowska-Cezak 2016). The variability of the WOW index as expressed by its standard deviation ranges from approx. 0.8–0.9 at the stations situated at the highest altitudes above sea level to approx. 1.1–1.2 in the lower zones. The dispersion of WOW values is much better illustrated by the coefficient of variation (CV), i.e. the ratio of the standard deviation to the mean expressed as a percentage. CV values indicate that the greatest variation in the WOW index occurs at the foot of the mountains and the smallest in their upper ranges; CV is inversely proportional to a station's absolute altitude (Table 2). The upper ranges of the Karkonosze Mountains have thermal characteristics closer to an oceanic climate than the lower zones, which manifests itself in smaller air temperature amplitudes in individual months of the year (Sobik et al. 2014) and in turn translates into lower variability of the WOW index. Trends in the WOW index calculated for all stations analysed in the Karkonosze Mountains and Jizera Mountains are negative (Table 3). This means that WOW values gradually declined, indicating a progressive increase in air temperatures and in individual thermal indicators (WOW components), and consequently, climate warming. Rates of decline of the WOW index ranged from approx. − 0.1/10 years in Hejnice to approx. − 0.4/10 years on Mount Szrenica. The average WOW decline rate for all stations amounted to − 0.27/10 years. WOW trends are statistically significant at the 0.05 significance level for 7 out of the 11 stations selected for the purposes of analysing thermal conditions (Table 3).
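The coefficient of variation used above is simply the standard deviation scaled by the mean. A minimal sketch, with invented index series standing in for a high-altitude station (small relative spread) and a foothill station (large relative spread):

```python
import statistics

def coefficient_of_variation(values):
    """CV: standard deviation as a percentage of the mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical WOW series: a summit station varies little around its
# mean, a foothill station varies a lot relative to its (small) mean.
summit = [8.8, 9.0, 9.5, 8.6, 9.2]
foothill = [2.0, 3.4, 1.1, 2.9, 1.6]
print(round(coefficient_of_variation(summit), 1))    # small CV
print(round(coefficient_of_variation(foothill), 1))  # large CV
```

The same absolute spread yields a much larger CV when the mean is small, which is why foothill stations, with their low average WOW, show the greatest relative variability.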
Similar negative trends, mostly statistically insignificant at the 0.05 level, have been observed for many thermal characteristics of the cold season in Central Europe (Domonkos and Piotrowicz 1998). In the period 1881–2010, a decline was observed in the upper ranges of the Karkonosze Mountains in the annual number of days with mean daily air temperatures below 0.0 °C and in the number of extremely cold months, with a simultaneous increase in the number of very warm months in individual decades; this has been especially noticeable since the early 1990s (Migała et al. 2016). The higher frequency of mild winters may be related to the change in atmospheric circulation macrotypes over the North Atlantic. In the last decades of the twentieth century, an increase in cyclonic zonal circulation from the southwest was observed (Migała 2005; Migała et al. 2016). In Western and Central Europe, this type of circulation generates an extensive zone of weather fronts with heavy clouds, and in the cold season brings warming and precipitation.

Table 3 Trend magnitude, correlation coefficient (R), and statistical significance of the WOW and WSW indices

Winter snowiness

Average values of the WSW in the Karkonosze Mountains and Jizera Mountains range from approx. 2.5–3.0 (winters with little snow) in the lowest locations to approx. 10.0–10.5 (moderately snowy winters) in the upper ranges. The situation is similar for the extreme values: the lowest, of approx. 0.6, are observed at the locations with the lowest altitudes above sea level (Jelenia Góra, Hejnice), and the highest, of approx. 13.0, at the stations situated at the highest altitudes (Mount Śnieżka, Mount Szrenica). The significant difference in altitude, exceeding 1250 m (from Jelenia Góra, which lies in a basin, to the highest peak—Mount Śnieżka), and the variety of landforms and station exposures result in a clear spatial variation of WSW index values.
The average WSW index values show greater variation (Figs. 4-5) than average WOW index values (Fig. 2).

Average (Avg), maximum (Max), and minimum (Min) values of the winter snowiness index (WSW) at the stations analysed

The spatial variability of snowiness in winter results from multiple factors, including altitude above sea level and the orographic deformation of the airflow field, which shapes precipitation and air temperatures. As a result of this deformation, there is a clear contrast in snow conditions between the catchment of the Elbe River (S, SW, and W macroexposure) and the catchment of the Odra River (NE and N macroexposure). Areas with SW and W macroexposures are characterised by longer snow cover retention times and greater snow cover depths. On average, the snow conditions prevailing in the slope zone in the Elbe River catchment are around 2–3 WSW classes higher than those on the northern slope of the mountains in the Odra River catchment. This contrast is especially clear within the 500–900 m a.s.l. range, as evidenced, inter alia, by the average WSW index values for the following station pairs: Harrachov versus Szklarska Poręba, and Przesieka or Horní Maršov versus Karpacz_2 (Figs. 5-6). The difference in winter snowiness between slopes with southern macroexposure and slopes with northern macroexposure corresponds to an altitude difference of around 250 m. Thus, the snow conditions typical of the slope zone in the Elbe River catchment are present in the Odra River catchment at altitudes that are approx. 250 m higher. This result is consistent with earlier studies of snow cover in the Western Sudetes (Sobik et al. 2009; Urban and Richterová 2010) and of temperature and precipitation variations in the Karkonosze Mountains (Sobik et al. 2014).
In the upper mountain ranges, the differences are blurred owing to the frequent alternating movement of snow, which is carried by the wind from one side of the massif to the other.

The relationship between station altitude (H) and the average WSW index value for southern (S) and northern (N) macroexposures

The fact that the main differences in precipitation at comparable absolute altitudes in the Karkonosze Mountains occur not in the east-west direction but between the southern and northern sides of the mountains was noted by Sobik et al. (2014), who emphasised the increase in winter precipitation in the upper catchment of the Kamienna River in the western part of the Polish Karkonosze. This effect is caused by the weakness of descending air currents during southern circulation, which results from the relatively low altitude of the Main Ridge of the Karkonosze Mountains to the west of Mount Szrenica and from the presence of the only slightly lower Upper Ridge of the Jizera Mountains, which lies parallel to the Karkonosze Mountains and hinders the flow of foehn currents (Kwiatkowski 1985). Similarly, Falarz (2002) demonstrated a greater effect of the meridional component of atmospheric circulation than of the zonal one, as well as the significant role of the foehn effect, in determining nival conditions and their variability in the Polish Tatra Mountains. The decrease in the number of days with snow cover observed at stations below 1300 m a.s.l. in the Swiss Alps at the end of the twentieth century was also due to the increase in air temperature and to macrocirculation associated with the North Atlantic Oscillation (NAO). The NAO is the factor that accounts for the difference in the number of days with snow cover between the northern and southern slopes of the Alps (Scherrer et al. 2004). The relationships observed for various periods at the stations analysed, described above and presented graphically (Figs. 5-6), are confirmed by the results obtained for the 1981–2010 period at selected stations (methodological justification presented in Chapter 2). The average WSW index values (illustrating climate patterns) noted for stations for the 1981–2010 period are, just as in the case of the WOW index, almost identical to those for the periods presented in Table 1. Only in a few cases were the maximum and minimum WSW index values for the period presented in Table 1 slightly higher or lower, respectively, than in the 1981–2010 period, since the maximum values were recorded in the 1960s and the minimum ones in the years 2001–2010 (Fig. 7).

Average (Avg), maximum (Max), and minimum (Min) values of the winter snowiness index (WSW) at selected stations

Temporal and spatial differences and trends of the winter snowiness index

The winter snowiness calendar, developed on the basis of the multiannual snowiness index averages (WSWavg) and standard deviations, demonstrates that moderately snowy winters predominated at all the stations analysed. In addition, the winters' snow characteristics have changed since the late 1980s and early 1990s. The prevalence of winters with little snow has increased everywhere except for Mount Śnieżka, while snowy winters have become less frequent. This was particularly pronounced in the last 10 years, when virtually no snowy winters were present. Winters with little snow in the Karkonosze Mountains and Jizera Mountains were 1989/1990–1990/1991, 1997/1998, 2000/2001, 2006/2007, and 2013/2014–2015/2016 (Table 4). During the winter of 2006/2007, extremely little snow was present, with the WSW index reaching its lowest values at nearly all stations (WSW ≤ WSWavg – 2 × δ)—from around 0.6–0.7 at the foot of the mountains to 8.7 in the upper ranges of the Karkonosze Mountains. The conditions were very similar during the winters of 2013/2014 and 2015/2016, which were at the same time also exceptionally warm winters.
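The extreme-winter criterion used here (e.g. WSW ≤ WSWavg − 2 × δ) amounts to measuring each winter's distance from the multiannual mean in units of the standard deviation. A minimal sketch of such a classifier, with a hypothetical WSW series; the one-sigma band and the class names are illustrative assumptions, not the exact boundaries of the paper's snowiness calendar:

```python
import statistics

def classify_winter(index, series, k=2.0):
    """Label a winter by how far its snowiness index lies from the
    multiannual mean, in units of the standard deviation (delta).
    The two-sigma extreme threshold follows the text; the one-sigma
    band and class names are illustrative."""
    avg = statistics.mean(series)
    delta = statistics.stdev(series)
    if index <= avg - k * delta:
        return "extremely little snow"
    if index <= avg - delta:
        return "little snow"
    if index >= avg + k * delta:
        return "exceptionally snowy"
    if index >= avg + delta:
        return "snowy"
    return "moderately snowy"

# Hypothetical WSW series for one station:
wsw = [6.1, 5.8, 7.0, 6.4, 4.9, 6.7, 5.5, 6.9, 6.2, 5.9]
print(classify_winter(3.0, wsw))  # far below the mean
```

A winter such as 2006/2007, whose index falls more than two standard deviations below the station mean, would land in the extreme class under this scheme.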
The decrease in the depth of snow cover and its retention time since the late 1980s has also been characteristic of stations situated at lower altitudes in the Alps (Laternser and Schneebeli 2003; Scherrer et al. 2004; Marty 2011). That trend was noticeable at stations on both the northern and southern slopes of the Swiss Alps (Marty 2008). The decline could have been associated with the extremely warm winters of the last two or three decades (Scherrer et al. 2004; Marty 2008), some of which, as with the winter of 2006/2007, appear to have been unique in Europe in the last 500 years (Luterbacher et al. 2007).

Table 4 Values of the winter snowiness index and its standard deviation (δ), coefficient of variation (CV), and frequency of snowy winters and winters with little snow (N)

Winters with high WSW indices (snowy winters) at most stations were 1966/1967, 1969/1970, 1975/1976, 1981/1982, 1995/1996, and 2004/2005–2005/2006. Among these, the highest WSW values (from 4.0–5.0 at the foot of the mountains to approx. 12.0 on Mount Śnieżka) were recorded in 1995/1996 and 2005/2006. During those winters, the highest WSW values for the multiannual period analysed were observed at nearly all individual stations (Table 4). High WSW index values were present in the winter of 1995/1996 at most stations in Poland (Olba-Zięty and Grabowski 2007) as well as in the Czech Republic (Němec and Zusková 2005). Occasionally, snowy winters occur only in the upper part of the altitude profile (1974/1975) or only in its bottom part (1962/1963). Except at the lowest stations with northern macroexposure, the winter of 1978/1979, which is referred to in Poland as "the winter of the century", did not make its mark (Majewski et al. 2011). The variability of the WSW index as expressed by its standard deviation ranges from approx. 1.0–1.1 at the stations situated at the lowest altitudes above sea level through approx. 1.4–1.7 in the upper ranges of the mountains to approx.
1.9–2.1 at slope stations. The dispersion of WSW values is better illustrated by the CV expressed as a percentage, which also allows WSW comparisons to be made within a time series and between stations. The smallest values (approx. 15%) are present in the upper ranges, i.e. snow cover is the most stable there. The highest CV values (approx. 45%) are present at the lowest stations located at the foot of the mountains (Table 4). Thus, the coefficient of variation for the WSW index is inversely proportional to altitude above sea level. This result is consistent with earlier conclusions concerning the coefficient of variation of snow cover parameters in the Polish Tatras (Falarz 2000-2001) and in the Polish Sudetes (Urban 2015, 2016). Nevertheless, considerable differences in CV values for the WSW index are present within the broad slope zone. This is particularly pronounced in the 600–700 m a.s.l. zone, where stations with southern macroexposure (Harrachov, Vysoké nad Jizerou) exhibit significantly lower WSW variability (approx. 25%) than stations with northern macroexposure (Szklarska Poręba, Przesieka) (approx. 40%). Trends in the WSW index calculated for all stations analysed in the Karkonosze Mountains and Jizera Mountains are negative, just like those for the WOW index (Table 3). This means that WSW values decreased steadily, pointing to reduced snow cover retention times and depths (both parameters are components of WSW) during subsequent winters. This confirms the results of earlier studies on snow cover on Mount Śnieżka in the 1901–2000 period (Głowicki 2005) and in the Polish Sudetes together with their foreland in the 1951–2007 period (Urban 2015, 2016). Rates of decline of the WSW index ranged from approx. − 0.04/10 years on Mount Śnieżka to approx. − 0.38/10 years in Jakuszyce. The average decline rate for all stations amounted to − 0.25/10 years.
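Trend magnitudes quoted per decade (e.g. − 0.25/10 years) correspond to an ordinary least-squares slope of the index against the winter year, scaled to 10 years. A minimal sketch; the series below is hypothetical, constructed to decline by exactly 0.03 per year:

```python
def trend_per_decade(years, values):
    """Ordinary least-squares slope of values against years,
    scaled to change per 10 years."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
    sxx = sum((x - mean_x) ** 2 for x in years)
    return 10.0 * sxy / sxx

# Hypothetical WSW series declining by 0.03 per year (= -0.3/10 years):
years = list(range(1961, 2016))
wsw = [8.0 - 0.03 * (y - 1961) for y in years]
print(round(trend_per_decade(years, wsw), 2))
```

In practice such a slope would be accompanied by a significance test (as in Table 3), since a real index series is far noisier than this synthetic one.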
WSW trends are statistically significant at the 0.05 significance level for just 4 out of the 13 stations selected for the purposes of analysing snowiness (Table 3). In the second half of the twentieth century, a slight downward trend in snow cover parameters was observed in most of Poland. Changes in snow cover are related to changes in atmospheric circulation, and in particular to the increased prevalence of advection of air masses from the western sector (Falarz 2004). Similarly, recent research from the Swiss Alps demonstrates a downward trend in all snow cover parameters in the 1970–2015 period, irrespective of station location. For example, the downward trend in retention time averaged 8.9 days/10 years, and for maximum depth it amounted to 10%/10 years on average. The shortened retention of snow cover in that area results primarily from its earlier disappearance (during spring thaws caused by higher spring temperatures) rather than from its later appearance (Klein et al. 2016). The direct cause of the decrease in the number of days with snow cover is the long-term change in air temperature and precipitation (Falarz 2004; Marty 2008). This mechanism is confirmed first and foremost by the pronounced upward trends in winter air temperatures, both in Poland (Kożuchowski and Żmudzka 2001; Wibig and Głowicki 2002) and in Europe (Schönwiese and Rapp 1997), which are closely linked to the increase in the frequency of western circulation over Poland (Ustrnul 1998). The increase in winter temperatures may reduce the share of snowfall in overall precipitation, which exhibited an upward trend in the cold half of the year in most regions of Poland in the period from 1930 to 1980 (Kożuchowski 1985).

Relationship between the winter severity index and the winter snowiness index

The analysis of very similar temporal trends in the WOW and WSW indices appears to reveal a relationship between them.
This applies to stations with both southern and northern macroexposures (Fig. 8). On this basis, and owing to the relatively high correlation coefficient (ca. 0.8), conclusions on the relationship between the WOW and WSW indices have been drawn for the Biebrza River valley (Olba-Zięty and Grabowski 2007) and for Warsaw (Majewski et al. 2011).

Winter severity (WOW) and snowiness (WSW) indices charted alongside their trend lines (bold black line) and simple regression equations at selected stations with southern (left column) and northern (right column) macroexposures

However, when the WOW-WSW correlation is presented in the form of an XY chart, the apparently close association is no longer as pronounced. Depending on the station, correlation coefficients range from 0.3–0.4 in the upper range of the mountains to ca. 0.85–0.87 at low-altitude stations. There is also a clear variation in the correlation coefficient (R) between the WOW and WSW indices depending on macroexposure and altitude above sea level (Fig. 9). Namely, stations with southern macroexposure (e.g. Bedřichov, Harrachov) exhibit markedly lower R values than stations with northern macroexposure (e.g. Karpacz, Świeradów Zdrój, Jelenia Góra). Thus, at stations with northern macroexposure on the slopes of the Karkonosze Mountains and Jizera Mountains, WOW and WSW values are more strongly correlated than at stations with southern macroexposure. This means that on the northern slopes, an increase in WOW index values almost always results in an increase in WSW index values, while on the southern slopes, this pattern is much less frequent. In addition, the range of WSW index variability on the southern slopes is greater than on the northern slopes. In both cases, however, the correlation is positive and the relationship directly proportional. The situation is slightly different for the stations situated in the upper ranges (Mount Śnieżka and Mount Szrenica), where this relationship is the weakest.
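The coefficients R quoted for the WOW-WSW pairs are Pearson correlations between the two index series of a station. A minimal sketch; the six winters of paired index values below are invented for illustration:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical station where severe winters tend to be snowy ones:
wow = [2.1, 4.8, 3.0, 5.5, 1.2, 3.9]
wsw = [3.0, 5.0, 5.5, 7.0, 2.0, 4.0]
print(round(pearson_r(wow, wsw), 2))  # strong positive correlation
```

A value near 0.85, as at the low-altitude stations, means cold winters are almost always also snowy there; the 0.3–0.4 values of the summit stations mean severity and snowiness are largely decoupled by wind transport of snow.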
The weakness of this relationship in the upper ranges is caused by the high prevalence of strong and very strong winds, which blow snow from one side of the mountain massif to the other. For example, average annual wind speeds on Mount Śnieżka and Mount Szrenica reach 12.2 and 9.5 m/s, respectively; the upper ranges of the Karkonosze Mountains are thus among the windiest locations in continental Europe (Głowicki 1995; Sobik et al. 2014). The average annual prevalence of very strong winds (with speeds of over 15 m/s) on Mount Śnieżka is 61%, and in January it is as high as 76% (Głowicki 1995).

Relationship between winter severity (WOW) and snowiness (WSW) indices charted alongside their trend lines (bold black line) and simple regression equations at selected stations with southern (left column) and northern (right column) macroexposures

Snow cover studies in the Swiss Alps have demonstrated that there appears to be no clear-cut relationship between snow cover retention time and air temperature. The nature of this relationship depends on the type of winter. Milder winters are usually associated with higher precipitation than colder ones (Beniston et al. 2003). Thus, in spite of the considerable variability of snow cover in subsequent years and the increase in air temperatures, average long-term climatic conditions in the Swiss Alps in the twentieth century were conducive to long snow cover retention times. This is due, inter alia, to the fact that mild winters are associated with an increase in snowfall in high locations and with rain in lower ones (Laternser and Schneebeli 2003). Nevertheless, any change in one or both of these factors may lead to significant changes in snow cover retention time (Beniston et al. 2003). The indirect influence of temperature conditions, which determine the type (solid, liquid) of precipitation, on snow cover in the Carpathians was pointed out by Obrębka-Starklowa et al. (1995). Earlier, Bultot et al.
(1994) demonstrated that the decrease in snow cover retention time with increasing air temperature was greatest at altitudes below 600 m a.s.l. The analysis of temperature and snow conditions in the Karkonosze Mountains and Jizera Mountains, conducted on the basis of the modified multicomponent formulas proposed by Paczos (1982), enables us to put forward the following claims:

- The orientation of the main ridge of the Karkonosze Mountains (along the WNW-ESE axis) and of the Jizera Mountains, which are oblique to the Karkonosze ridge, in relation to the predominant western circulation in winter, is an important factor affecting the thermal characteristics of air masses around these massifs;
- Stations located on the northern side of the mountains, below the ridge zone, are subject to the warming effect of foehn winds during the cold season. This effect increases together with the difference in altitude between the ridge and the station in question;
- In winter, stations in the lower and middle parts of slopes with northern macroexposure are, on average, one WOW class warmer than stations with southern macroexposure located at comparable altitudes above sea level;
- The spatial distribution of the average WOW index value is related to the absolute altitude of the station both on the northern and southern sides of the mountains;
- Trends in the WOW winter severity index calculated for all stations analysed in the Karkonosze Mountains and Jizera Mountains are negative. The decreasing WOW values indicate a progressive increase in air temperatures and in individual thermal indicators (WOW components), and consequently, climate warming;
- Moderate winters predominate, but in recent decades the frequency of mild winters has increased;
- Southern and western (windward) slope macroexposure is conducive to better snow conditions than northern macroexposure, which is leeward in relation to the prevailing winter circulation from the west and southwest;
- The spatial variability of the average WSW in the Karkonosze Mountains and Jizera Mountains results from multiple factors, including altitude above sea level and the orographic deformation of the airflow field. As a result of this deformation, there is a clear contrast in snow conditions between the catchment of the Elbe River (SW and W macroexposure) and the catchment of the Odra River (NE and N macroexposure). Areas with SW and W macroexposures exhibit better snow conditions. This contrast is particularly noticeable in the 500–900 m a.s.l. range. The snow conditions typical of the slope zone in the Elbe River catchment are present in the Odra River catchment at altitudes approx. 250 m higher. In the upper mountain ranges, the differences are blurred owing to the frequent alternating movement of snow, which is carried by the wind from one side of the massif to the other;
- Average WSW index values exhibit much greater spatial variability than WOW index values;
- Trends in the WSW index at all stations analysed were negative and in most cases insignificant at the 0.05 significance level;
- Moderately snowy winters predominate, but in recent decades the frequency of winters with little snow has increased;
- Variation in the WOW and WSW indices is inversely proportional to the station's altitude;
- The meridional circulation component is much more important than the zonal one in determining the temperature and snowiness of winters in the study area;
- The trends in winter temperature and snowiness as expressed by the WOW and WSW indices are in line with the trends observed in other mountain areas of Poland and Europe, as evidenced by the literature cited in this paper;
- The correlation between the WOW and WSW indices is positive (directly proportional);
- At stations with northern macroexposure, the WOW and WSW indices are more strongly correlated than at those with southern macroexposure. This relationship is the weakest at the stations situated in the upper ranges (Mount Śnieżka and Mount Szrenica), which is caused by the high prevalence of very strong winds that blow snow from one side of the mountain massif to the other.

The results of this research may provide the basis for further studies on snow cover and temperature conditions in mountain areas in Poland, and are in line with the trends observed in other European mountain ranges.

References

Bednorz E (2004) Snow cover in eastern Europe in relation to temperature, precipitation and circulation. Int J Climatol 24:591–601. https://doi.org/10.1002/joc.1014
Bednorz E (2008) Synoptic reasons for heavy snowfalls in the Polish–German lowlands. Theor Appl Climatol 92:133–140. https://doi.org/10.1007/s00704-007-0322-4
Bednorz E (2013) Synoptic conditions of heavy snowfalls in Europe. Geogr Ann Ser A 95:67–78.
https://doi.org/10.1111/geoa.12001
Beniston M (1997) Variations of snow depth and duration in the Swiss Alps over the last 50 years: links to changes in large-scale climatic forcings. Clim Chang 36:281–300. https://doi.org/10.1007/978-94-015-8905-5_3
Beniston M (2000) Environmental change in mountains and uplands. Arnold, London, pp 172
Beniston M, Keller F, Goyette S (2003) Snow pack in the Swiss Alps under changing climatic conditions: an empirical approach for climate impacts studies. Theor Appl Climatol 74:19–31. https://doi.org/10.1007/s00704-002-0709-1
Brown RD, Petkova N (2007) Snow cover variability in Bulgarian mountainous regions, 1931–2000. Int J Climatol 27:1215–1229. https://doi.org/10.1002/joc.1468
Bultot F, Gellens D, Schädler B, Spreafico M (1994) Effect of climate change on snow accumulation and melting in the Broye catchment (Switzerland). Clim Chang 28:339–363. https://doi.org/10.1007/bf01104078
Cayan DR (1996) Interannual climate variability and snowpack in the Western United States. J Clim 9:928–948. https://doi.org/10.1175/1520-0442(1996)009<0928:icvasi>2.0.co;2
ČMeS (2016) Meteorologický slovník výkladový a terminologický (eMS), version eMS 1.4 (3/2016). http://slovnik.cmes.cz. Accessed 22 May 2017
Dettinger MD, Cayan DR (1995) Large-scale atmospheric forcing of recent trends toward early snowmelt runoff in California. J Clim 8(3):606–623. https://doi.org/10.1175/1520-0442(1995)008<0606:lsafor>2.0.co;2
Domonkos P, Piotrowicz K (1998) Winter temperature characteristics in Central Europe. Int J Climatol 18:1405–1417. https://doi.org/10.1002/(sici)1097-0088(19981115)18:13<1405::aid-joc323>3.0.co;2-d
Falarz M (2000-2001) Zmienność wieloletnia występowania pokrywy śnieżnej w polskich Tatrach (Long-term variability of snow cover in the Polish part of the Tatra Mountains).
Folia Geogr Seria: Geogr-Phys 31–32:101–123 (in Polish)
Falarz M (2002) Klimatyczne przyczyny zmian i wieloletniej zmienności występowania pokrywy śnieżnej w polskich Tatrach (The climatic causes of changes and long-term variability in the snow cover of the Polish Tatra Mountains). Przegl Geogr 74(1):83–107 (in Polish)
Falarz M (2004) Variability and trends in the duration and depth of snow cover in Poland in the 20th century. Int J Climatol 24(13):1713–1727. https://doi.org/10.1002/joc.1093
Falarz M (2006) Wykrywanie i korekta niejednorodności wieloletnich serii niwalnych (Detection and correction of inhomogeneity in the long-term snow cover series). Ann Univ Mariae Curie–Sk LXI 18:155–163 (in Polish)
Foster J, Owe M, Rango A (1983) Snow cover and temperature relationships in North America and Eurasia. J Clim Appl Meteorol 22:460–469. https://doi.org/10.1175/1520-0450(1983)022<0460:scatri>2.0.co;2
Głowicki B (1970) O niektórych cechach mikroklimatu Kotliny Jelenigórskiej (Some aspects of the microclimate of the Jelenia Góra valley). Rocz Jeleniogórski 8:147–160 (in Polish)
Głowicki B (1993) Zmienność termicznych pór roku w Karkonoszach (Variability of the thermal seasons in the Karkonosze Mountains). In: Tomaszewski J (ed) Geoekologiczne Problemy Karkonoszy (Geoecological problems of the Karkonosze Mountains). Materiały z sesji naukowej w Karpaczu 11–13.X.1991, pp 21–28 (in Polish)
Głowicki B (1995) Klimat Śnieżki (Climate of Mt. Śnieżka). In: Dubicki A, Głowicki B (eds) Wysokogórskie Obserwatorium Meteorologiczne na Śnieżce. Biblioteka Monitoringu Środowiska, Wrocław, pp 37–64 (in Polish)
Głowicki B (2005) Klimat Karkonoszy (Climate of the Karkonosze Mountains). In: Mierzejewski M (ed) Karkonosze – przyroda nieożywiona i człowiek. Wydawnictwo UWr, Wrocław, pp 381–397 (in Polish)
Haeberli W, Beniston M (1998) Climate change and its impacts on glaciers and permafrost in the Alps.
Ambio 27:258–265
Hantel M, Ehrendorfer M, Haslinger A (2000) Climate sensitivity of snow cover duration in Austria. Int J Climatol 20:615–640. https://doi.org/10.1002/(sici)1097-0088(200005)20:6<615::aid-joc489>3.0.co;2-0
Hess M (1965) Piętra klimatyczne w polskich Karpatach Zachodnich (Vertical climatic zones in the Polish Western Carpathians). Zeszyty Naukowe UJ. Pr Geol 11:1–262 (in Polish)
Hess M (1974) Klimat Krakowa (Climate of Cracow). Folia Geogr Seria: Geogr-Phys 8:45–102 (in Polish)
Hess M, Niedźwiedź T, Obrębka-Starklowa B (1980) O prawidłowościach piętrowego zróżnicowania stosunków klimatycznych w Sudetach (On regularities in zonal differentiation of the climatic conditions in the Sudety Mountains). Rocznik Naukowo-Dydaktyczny Wyższej Szkoły Pedagogicznej w Krakowie 71:167–201 (in Polish)
Hidalgo HG, Das T, Dettinger MD, Cayan DR, Pierce DW, Barnett TP, Bala G, Mirin A, Wood AW, Bonfils C, Santer BD, Nozawa T (2009) Detection and attribution of streamflow timing changes to climate change in the Western United States. J Clim 22(13):3838–3855. https://doi.org/10.1175/2009jcli2470.1
Huang J, van den Dool HM, Barnston AG (1996) Long-lead seasonal temperature prediction using optimal climate normals. J Clim 9:809–817. https://doi.org/10.1175/1520-0442(1996)009<0809:llstpu>2.0.co;2
IPCC (2001) Climate change. The IPCC Third Assessment Report, vols I (Science), II (Impacts and Adaptation), and III (Mitigation Strategies). Cambridge University Press, Cambridge
IPCC (2013) Climate change. The IPCC Fifth Assessment Report, vol I (Science). Cambridge University Press, Cambridge
Jackson MC (1977) A classification of the snowiness of 100 winters–a tribute to the late LCW Bonacina. Weather 32(3):91–98. https://doi.org/10.1002/j.1477-8696.1977.tb04523.x
Janasz J (2000) Warunki termiczne i śnieżne zim w Lublinie (1960/61–1994/95) (Thermic and snow conditions of winters in Lublin (1960/61–1994/95)).
Acta Agrophysica 34:71–78 (in Polish)
Janiszewski F (1988) Instrukcja dla stacji meteorologicznych (Instruction for weather stations). Wydawnictwa Geologiczne, Warszawa, 264 pp (in Polish)
Karl TR, Groisman PY, Knight RW, Heim RR (1993) Recent variations of snow cover and snowfall in North America and their relation to precipitation and temperature variations. J Clim 6:1327–1344. https://doi.org/10.1175/1520-0442(1993)006<1327:rvosca>2.0.co;2
Klein G, Vitasse Y, Rixen C, Marty C, Rebetez M (2016) Shorter snow cover duration since 1970 in the Swiss Alps due to earlier snowmelt more than to later snow onset. Clim Change. https://doi.org/10.1007/s10584-016-1806-y
Kliegrová S, Metelka L, Materna J (2009) Mění se klima Krkonoš? (Is the climate of the Karkonosze Mountains changing?). Krkonoše – Jizerské hory 3:24–25 (in Czech)
Kożuchowski K (1985) Zmienność opadów atmosferycznych w Polsce w stuleciu 1881–1980 (Variation in precipitation in the years 1881–1980 in Poland). Acta Geogr Lodziensia 48:1–158 (in Polish)
Kożuchowski K, Żmudzka E (2001) Ocieplenie w Polsce: skala i rozkład sezonowy zmian temperatury powietrza w drugiej połowie XX wieku (The warming in Poland: the range and seasonality of the changes in air temperature in the second half of the 20th century). Przegl Geofizyczny 46(1–2):81–90 (in Polish)
Kulasová A, Bubeníčková L (2009) Podnebí a počasí Jizerských hor (Climate and weather of the Jizera Mountains). In: Karpaš R et al (eds) Jizerské hory, O mapách, kamení a vodě. Nakladatelství RK, Liberec, pp 344–371 (in Czech)
Kuziemska D (1983) O zakresie zmienności temperatury powietrza w Polsce (On the range of the air temperature variability in Poland). Przegl Geofizyczny 3–4:329–343 (in Polish)
Kwiatkowski J (1972) Feny w Kotlinie Jeleniogórskiej (Foehns in the Jelenia Góra valley).
Acta Univ Wratislav 173:3–46 (in Polish)
Kwiatkowski J (1975) Zasięg fenów sudeckich i ich wpływ na mezoklimat regionów południowo-zachodniej i środkowej Polski (The reach of Sudeten foehns and their influence on the mesoclimate of southwestern and central regions of Poland). Przegl Geofizyczny 20(28) 1:15–30 (in Polish)
Kwiatkowski J (1979) Zjawiska fenowe w Sudetach i na przedpolu Sudetów (Foehn phenomena in the Sudetes and in the Sudetian foreland). Probl Zagosp Ziem Górs 20:243–280 (in Polish)
Kwiatkowski J (1985) Szata śnieżna, szadź i lawiny (Snow cover, rime and avalanches). In: Jahn A (ed) Karkonosze Polskie (The Polish Karkonosze Mountains). Wrocław, pp 117–144 (in Polish)
Lapin M, Faško P, Pecho J (2007) Snow cover variability and trends in the Tatra Mountains in 1921–2006. In: Proceedings of the 29th International Conference on Alpine Meteorology, Chambéry, France, 4–6 June 2007
Laternser M, Schneebeli M (2003) Long-term snow climate trends of the Swiss Alps (1931–99). Int J Climatol 23:733–750. https://doi.org/10.1002/joc.912
Lipina P, Kain I, Žídek D (2014) Návod pro pozorovatele automatizovaných meteorologických stanic (Guide for observers of automated weather stations). Metodický předpis č. 2. ČHMÚ, Praha (in Czech)
Luterbacher J, Liniger MA, Menzel A, Estrella N, Della-Marta PM, Pfister C, Rutishauser T, Xoplaki E (2007) Exceptional European warmth of autumn 2006 and winter 2007: historical context, the underlying dynamics, and its phenological impacts. Geophys Res Lett 34:L12704. https://doi.org/10.1029/2007GL029951
Majewski G, Gołaszewski D, Przewoźniczuk W, Rozbicki T (2011) Warunki termiczne i śnieżne zim w Warszawie w latach 1978/79–2009/10 (Thermal and snow conditions of winters in Warsaw 1978/79–2009/10). Prace i Studia Geogr 47:147–155 (in Polish)
Marty C (2008) Regime shift of snow days in Switzerland. Geophys Res Lett 35:L12501. https://doi.org/10.1029/2008GL033998
Marty C (2011) Snow cover changes in the Alps. In: Singh VP, Singh P, Haritashya UK (eds) Encyclopedia of snow, ice and glaciers.
Part of the series Encyclopedia of Earth Sciences Series. Springer Netherlands, Netherlands, pp 1036–1038. https://doi.org/10.1007/978-90-481-2642-2_612 Mayes Boustead BE, Hilberg SD, Shulski MD, Hubbard KG (2015) The accumulated winter season severity index (AWSSI). J Appl Meteor Climatol 54:1693–1712. https://doi.org/10.1175/JAMC-D-14-0217.1 Migała K (2005) Piętra klimatyczne w górach Europy a problem zmian globalnych (Climatic belts in the European Mountains and the issue of global changes). Studia Geograficzne 78, Wrocław, pp 149 (in Polish) Migała K, Urban G, Tomczyński K (2016) Long-term air temperature variation in the Karkonosze mountains according to atmospheric circulation. Theor Appl Climatol 125:337–351. https://doi.org/10.1007/s00704-015-1468-0 Němec L, Zusková I (2005) Změny sněhové pokrývky v České republice od roku 1926 (Changes in snow cover in the Czech Republic since 1926). Meteorologické Zprávy (Meteorol Bull) 58(5):135–138 (in Czech) Niedźwiecki M (1998) Charakterystyka pokrywy śnieżnej w Łodzi w latach 1950–1989 (The charakteristic of snow cover in Łódź in the period 1950–1989). Acta Univ Lodz 3:265–277 Niedźwiedź T, Bąbka M, Borkowski J, Cebulak E, Czekierda D, Dziewulska-Łosiowa A, Falarz M et al (2003) In: Kossowska-Cezak U, Niedźwiedź T, Paszyński J (ed) Słownik meteorologiczny (Glossary of meteorology). IMGW, Warszawa, pp 495 Obrębka-Starklowa B, Bednarz Z, Niedźwiedź T, Trepińska J (1995) On the trends of the climatic changes in the higher parts of the Carpathian Mountains. Zeszyty Naukowe UJ. Prace Geogr 95:123–151 Ojrzyńska H, Błaś M, Kryza M, Sobik M, Urban G (2010) Znaczenie lasu oraz morfologii terenu w rozwoju pokrywy śnieżnej w Sudetach Zachodnich na przykładzie sezonu zimowego 2003/2004 (The role of forest and terrain morphology in snow cover development in the Western Sudety – 2003/2004 winter season case study). 
Sylwan 154(6):412–428 (in Polish) Olba-Zięty E, Grabowski J (2007) Warunki termiczne i śnieżne zim doliny Biebrzy w latach 1980/1981–2004/2005 (Thermal and snowy condition of Winters in Biebrza Valley during 1980/81–2004/2005). Acta Agrophysica 10(3):625–634 (in Polish) Paczos S (1982) Stosunki termiczne i śnieżne zim w Polsce (Thermal conditions and snowiness of Winters in Poland). Rozprawy hab., UMCS, Lublin, p 24 (in Polish) Paczos S (1985) Zagadnienie klasyfikacji zim w świetle różnych kryteriów termicznych (Classification of Winters in the Light of Various Thermic Criteria). Ann UMCS 40(7):133–155 (in Polish) Piasecki J (1995) Pokrywa śnieżna na Szrenicy w latach 1960–1990 i klasyfikacja śnieżności zim (The snow cover on the Szrenica Mountain during 1960–1990). Acta UWr Prace Inst Geogr Ser C 1705:23–57 (in Polish) Piotrowicz K (1997) Thermal differentiation of winters in the Carpathian Mountains altitudinal profile during the period 1961/62–1990/91. Geogr Pol 70:89–100 Piotrowicz K (2004) Temperatura okresu zimowego jako wskaźnik zmian klimatu (Winter period Temperature as a Climate change index). In: Haladyn K, Mikłaszewski A, Radomski R, Ropuszyński P, Wojtyszyn B (ed) Klimat–środowisko–człowiek (Climate-environment-man). Polski Klub Ekologiczny, Wrocław, pp 23–31 (in Polish) Piotrowicz K (2006) Kryteria wyznaczania ekstremalnych zim (Extreme winters setting criterions). Ann Univ Mariae Curie-Sk LXI 42:362–369 Príbullová A, Pecho J, Bíčárová S (2009) Analysis of snow cover at selected meteorological stations in the High Tatra Mountains. In: Príbullová A and Bíčárová S (ed) Sustainable Development and Bioclimate. Reviewed Conference Proceedings. Stará Lesná:56–57 Rebetez M, Reinhard M (2008) Monthly air temperature trends in Switzerland 1901–2000 and 1975–2004. Theor Appl Climatol 91:27–34. https://doi.org/10.1007/s00704-007-0296-2 Sansom J, Tait A (2004) Estimation of long-term climate information at locations with short-term data records. 
J Appl Meteorol 43:915–923. https://doi.org/10.1175/1520-0450(2004)043<0915:EOLCIA>2.0.CO;2 Scherrer SC, Appenzeller C, Laternser M (2004) Trends in Swiss Alpine snow days: the role of local and large scale climate variability. Geophys Res Lett 31:L13215. https://doi.org/10.1029/2004GL020255 Schmuck A (1969) Klimat Sudetów (The climate of the Sudetes). Probl Zagosp Ziem Górs 5(18):93–153 (in Polish) Schönwiese CD, Rapp J (1997) Climate trend atlas of Europe based on observations 1891–1990. Kluwer Academic Publishers, Dordrecht Sobik M, Błaś M, Migała K, Godek M, Nasiółkowski T (2014) Klimat (Climate). In: Knapik R, Raj A (ed) Przyroda Karkonoskiego Parku Narodowego (The nature of the Karkonosze National Park). Jelenia Góra, pp 147–186 (in Polish) Sobik M, Urban G (2000) Warunki termiczne zlewni Kamionka w Górach Izerskich (Thermal conditions of the Kamionek Catchment in the Izerskie Mountains). Acta UWr 2269(LXXIV):143–157 (in Polish) Sobik M, Urban G, Błaś M, Kryza M, Tomczyński K (2009) Uwarunkowania zalegania pokrywy śnieżnej w Sudetach Zachodnich w sezonach zimowych 2001/2002–2005/2006 (Determinants of snow cover in the winter seasons in Western Sudety Mountains during 2001/2002–2005/2006). Wiad Meteorol Hydrol Gosp Wodnej 2–3:31–47 (in Polish) Srivastava AK, Guhathakurta P, Kshirsagar SR (2003) Estimation of annual and seasonal temperatures over Indian stations using optimal normals. Mausam 54:615–622 Stewart IT, Cayan DR, Dettinger MD (2004) Changes in snowmelt runoff timing in Western North America under a 'Business as Usual' Climate Change Scenario. Clim Chang 62(1–3):217–232. https://doi.org/10.1023/B:CLIM.0000013702.22656.e8 Stewart IT, Cayan DR, Dettinger MD (2005) Changes toward earlier streamflow timing across Western North America. J Clim 18(8):1136–1155. https://doi.org/10.1175/jcli3321.1 Trepińska J (1976) Mild winters in Cracow against the background of the contemporary circulation processes. 
Geogr Pol 33:97–105 Twardosz R, Kossowska-Cezak U (2016) Exceptionally cold and mild winters in Europe (1951–2010). Theor Appl Climatol 125:399–411. https://doi.org/10.1007/s00704-015-1524-9 Urban G (2002) Warunki termiczne obszarów mrozowiskowych Gór Izerskich i ich wpływ na wzrost lasu (The thermal conditions of frost areas of Izera Mountains and their influence on growth forest). Dissertation, University of Wrocław, pp 165 (in Polish) Urban G (2010) Ocena wybranych metod obliczania średniej dobowej, miesięcznej i rocznej wartości temperatury powietrza (na przykładzie Sudetów Zachodnich i ich przedpola) (Evaluation of selected methods of calculating the daily, monthly and annual mean air temperature (with the Western Sudety Mountains and their foreland as an example)). Opera Corcon 47(1):23–33 Urban G (2013) Evaluation of accuracy of selected methods of calculation of the daily mean air temperature depending on atmospheric circulation (the case study of the Western Sudety Mountains and their foreland). Opera Corcon 50/S:81–96 Urban G (2015) Zaleganie pokrywy śnieżnej i jego zmienność w polskiej części Sudetów i na ich przedpolu (Duration of snow cover and its variability in the Polish part of the Sudetes Mts. and their foreland). Przegl Geogr 87(3):497–516. https://doi.org/10.7163/PrzG.2015.3.5 (in Polish) Urban G (2016) Snow cover and its variability in the Polish Sudetes Mts. and the Sudetic Foreland. Geografie 121(1):32–53 Urban G, Richterová D (2010) Warunki śniegowe a uprawianie narciarstwa w Sudetach Zachodnich na polsko–czeskim pograniczu (Snow conditions and skiing in Western Sudety on Polish-Czech Republic border). Wiad Meteorol Hydrol Gospod Wod 1–4:3–28 (in Polish) Urban G, Richterová D, Vajskebr V (2011) Pokrywa śnieżna w październiku 2009 w Sudetach Zachodnich jako przykład zjawiska ekstremalnego (Snow cover in October 2009 in the Western Sudety Mountains as an example of extreme phenomenon). 
Wiad Meteorol Hydrol Gospod Wod 5(4):75–96 (in Polish) Urban G, Tomczyński K (2017) Air temperature trends at Mount Śnieżka (Polish Sudetes) and solar activity, 1881–2012. Acta Geogr Sloven 57-2:33–44. https://doi.org/10.3986/AGS.837 Ustrnul Z (1998) Variability of air temperature and circulation at selected stations in Europe. In Proceedings of the 2nd European Conference on Applied Climatology. Österreichische Beitra¨ge zu Meteorologie und Geophysik. Zentralanstalt für Meteorologie und Geodynamik, Vienna, 19; 81 (full text on ECAC CD-ROM, session 1) Wibig J, Głowicki B (2002) Trends of minimum and maximum temperature in Poland. Clim Res 20(2):122–133. https://doi.org/10.3354/cr020123 WMO (2011) Characterizing climate from datasets. In: Guide to Climatological Practices, World Meteorological Organization, Geneva, pp. 54–72 updated 19.01.2016 http://www.wmo.int/pages/prog/wcp/wcdmp/GCDS_1.php. Accessed 22 May 2017 Institute of Meteorology and Water Management – National Research Institute, Podleśna Street 01–673, 61, Warsaw, Poland Grzegorz Urban Czech Hydrometeorological Institute, branch office Ústí nad Labem, Kočkovská 18, 400 11, Ústí nad Labem, Czechia Dáša Richterová Czech Hydrometeorological Institute, branch office Hradec Králové, Dvorská 410, 503 11, Hradec Králové, Czechia Stanislava Kliegrová Czech Hydrometeorological Institute, branch office Praha, Na Šabatce 17, 4 − Komořany, 143 06, Praha, Czechia Ilona Zusková Institute of Geography and Regional Development, University of Wrocław, Uniwersytecki Pl. 1, 50–137, Wrocław, Poland Piotr Pawliczek Correspondence to Grzegorz Urban. 
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Urban, G., Richterová, D., Kliegrová, S. et al. Winter severity and snowiness and their multiannual variability in the Karkonosze Mountains and Jizera Mountains. Theor Appl Climatol 134, 221–240 (2018). https://doi.org/10.1007/s00704-017-2270-y Issue Date: October 2018 Over 10 million scientific documents at your fingertips Switch Edition Academic Edition Corporate Edition Not affiliated © 2022 Springer Nature Switzerland AG. Part of Springer Nature.
CommonCrawl
Late Holocene fire history and charcoal decay in subtropical dry forests of Puerto Rico
Wei Huang, Xianbin Liu, Grizelle González & Xiaoming Zou
Fire is an important disturbance that influences species composition, community structure, and ecosystem function in forests. Disturbances such as hurricanes and landslides are critical determinants of community structure in Caribbean forests, but few studies have addressed the effect of paleofire disturbance on forests in Puerto Rico, USA. Soil charcoal is widely used to reconstruct fire history. However, the occurrence and frequency of paleofire can be underestimated due to charcoal decay. We reconstructed the fire history of subtropical dry forests of Puerto Rico based on the analysis of soil macrocharcoal numbers adjusted by the negative exponential decay function of charcoal. Twenty-one fire events occurred over the last 1300 yr in the subtropical dry forest of northeastern Puerto Rico, and 10 fire events occurred over the last 4900 yr in the subtropical dry forest of southeastern Puerto Rico. The average turnover time of charcoal in these subtropical dry forest soils of Puerto Rico was 1000 to 1250 yr. Soil charcoal decay leads to an underestimation of one to two fire events during the Late Holocene in the subtropical dry forests of Puerto Rico. The peak of paleofire events for subtropical dry forests in northeastern and southeastern Puerto Rico was broadly similar, occurring between 500 and 1300 calibrated years before present (cal yr BP; before present is understood to mean before 1950 AD). Fire frequency of the subtropical dry forests in Puerto Rico decreased after the immigration of Europeans in the past 500 yr. The fire that occurred between 4822 and 4854 cal yr BP can be interpreted as either a natural fire or a new record of a native peoples' settlement in southeastern Puerto Rico.
Fire became a frequent disturbance in the subtropical dry forest of Puerto Rico after the development of cultigens by native peoples. Our data suggested that fire was a frequent disturbance and human activity was likely a dominant cause of these paleofires in the subtropical dry forests of Puerto Rico.
Fire is one of the main disturbance factors affecting species distribution, succession, and evolution in many forest ecosystems (Adámek et al. 2015, Bush et al. 2015). Fire influences the distribution and occurrence of plants by favoring those with high tolerance to fire and those capable of rapid colonization of the burned site. Factors influencing the forest community, such as soil type, topography, species composition, and time elapsed between fire disturbances, are important in determining the successional dynamics of forest regeneration after fire (Frégeau et al. 2015). Disturbance is regarded as a critical determinant of species composition and community structure in the forests of Puerto Rico (Waide and Lugo 1992). Research on disturbances in the forests of Puerto Rico has mostly concentrated on hurricanes and landslides (Guariguata 1990, Foster et al. 1999). Hurricane and fire disturbance share some cyclic characteristics in many tropical forests (Cochrane and Schulze 1999, Pascarella et al. 2004). The resprouting of surviving trees and the establishment and growth of seedlings and saplings of pioneers in open patches are important components of tropical forest recovery following both hurricane and fire disturbance (Zimmerman et al.
1994, Hjerpe et al. 2001). Frequent and possibly more intense natural fires occurred around 5200 calibrated years before present (cal yr BP, where 0 yr BP corresponds to 1950 AD) in a coastal region of Puerto Rico (Caffrey and Horn 2014, Rivera-Collazo et al. 2015). Regional climate phenomena in the pan-Caribbean area (e.g., hurricanes and the interannual changes in the position of the Intertropical Convergence Zone; Barry and Chorley 2010) were reported to induce paleofire in Dominica and Cuba (Crausbay et al. 2015, Peros et al. 2015). Synchrony of fire is a characteristic of climate-driven burning (Caffrey and Horn 2014). Thus, frequent paleofires might also have occurred in Puerto Rico during the Late Holocene. However, studies of paleofire in Puerto Rico are surprisingly rare. Soil macrocharcoal (>2 mm) is widely used to reconstruct fire history in situ and to compare regional fire histories because it is not transported over long distances (McMichael et al. 2012, Hubau et al. 2015). But random samples of soil charcoal do not guarantee that all fires are detected, because 1) some soil charcoal particles can be missed during random sampling of soil, and 2) charcoal may disappear or decrease in size due to its decay (Frégeau et al. 2015, Payette et al. 2017). Therefore, an increasing number of studies have constructed accumulation curves to estimate the actual number of local fires (Payette et al. 2012, Payette et al. 2017). However, few studies have considered a decrease in charcoal size over time caused by charcoal decay and the burning of charcoal during subsequent fires. Frégeau et al. (2015) used a negative exponential function to evaluate charcoal decay, but their estimation was based on the assumption that the numbers of charcoal fragments originating from fires were invariant over every 200-year period. This assumption is likely wrong, because charcoal abundance is subject to changes due to fire intensity and frequency (Tovar et al. 2014, Inoue et al.
2016), which can differ with climatic conditions. In order to address the gaps in paleofire knowledge in Puerto Rico, we conducted this study to: 1) reconstruct fire history in subtropical dry forests of Puerto Rico through radiocarbon dating of soil charcoal fragments; 2) estimate charcoal decay rates based on maximum charcoal sizes within each time interval; 3) estimate the number of missing fire events due to charcoal decay; and 4) infer the effect of paleoclimate on paleofire in the subtropical dry forests of Puerto Rico by pairing charcoal 13C discrimination (Δ13C) with contemporary plant Δ13C values. The study area was located in eastern Puerto Rico, USA (Fig. 1). Nine sites were sampled and partitioned into two forest assemblages representing the subtropical dry forests in Puerto Rico.
Fig. 1 Location of the study area and sites in the subtropical dry forests of Puerto Rico, USA: (a) map of Puerto Rico; (b) soil charcoal pits in the northeastern dry forest; (c) soil charcoal pits in the southeastern subtropical dry forest. Charcoal was sampled in December 2015 in the northeastern subtropical dry forest, and in December 2014 in the southeastern subtropical dry forest.
The first forest assemblage was referred to as the northeastern subtropical dry forest (Ewel and Whitmore 1973) and included three sites: Ceiba I, Ceiba II, and Las Cabezas, between 18.22° and 18.38° N, and 65.60° and 65.67° W (Fig. 1), situated between sea level and nearly 100 m in elevation, with a 15.2° average slope inclination (Gould et al. 2006). The average annual temperature was 25.7 °C, and the average annual precipitation was 1416 mm. The soils were generally composed of Alfisols and Haplustalfs (Ping et al. 2013), and had an average depth of 0.38 m and a pH of 6.51. The parent material of this soil is colluvium and andesitic residuum (Ping et al. 2013). The most abundant trees included Bucida buceras L., Guapira fragrans (Dum. Cours.)
Little, Bourreria succulenta Jacq., and Gymnanthes lucida Sw.; the shrub layer was dominated by Triphasia trifolia (Burm. f.) P. Wilson, Chamaesyce articulata (Burm.) Britton, Lantana camara L., and Argythamnia stahlii Urb.; and several species of lianas included Macfadyena unguis-cati (L.) A.H. Gentry, Tragia volubilis L., and Serjania polyphylla (L.) Radlk. (Gould et al. 2006). The second forest assemblage was referred to as the southeastern subtropical dry forest (Ewel and Whitmore 1973) and included six sites in the USDA Forest Service, Institute of Tropical Forestry's Guayama Research Area, between 18.04° and 18.05° N, and 66.16° and 66.17° W (Fig. 1). These sites were located between 270 m and nearly 640 m in elevation, with a 28.2° average slope inclination. The average annual temperature was 22.72 °C, and the average annual precipitation was 1693.18 mm. The soils were generally composed of shallow Typic Haplustalfs (Muñoz et al. 2017), with an average depth of 1.1 m and a soil pH of 5.32. The parent material of this soil is semiconsolidated volcanic rock (USDA Soil Conservation Service 1977). The most abundant tree species in this subtropical dry forest included Bucida buceras, Casearia guianensis (Aubl.) Urb., Pictetia aculeata (Vahl) Urb., Nectandra coriacea (Sw.) Griseb., Andira inermis (W. Wright) Kunth ex DC., Guapira fragrans (Dum. Cours.) Little, Randia aculeata L., Zanthoxylum monophyllum (Lam.) P. Wilson, Eugenia foetida Pers., and Leucaena leucocephala (Lam.) de Wit.
Soil sampling and charcoal extraction
A square plot (10 m × 10 m) was positioned at each site. At each plot, a 20 cm × 20 cm area of surface organic matter was removed to expose the mineral soil. Soils were sampled from the surface to the parent material, at 20 cm intervals per sample layer, to maintain a fine vertical resolution of the extracted charcoal assemblages.
In the laboratory, the mineral soils were suspended in 10% potassium hydroxide (KOH) solution for at least 24 h in order to disperse soil aggregates (Inoue et al. 2016). The soils were wet-sieved using superimposed sieves of 5 mm and 2 mm. The macrocharcoal fragments were extracted from the sieves, washed, and weighed.
Wood litterfall collection and branch sampling
Within each plot in the northeastern subtropical dry forest, we randomly installed three baskets of 0.25 m2 at 1 m above ground level. The wood collected in the baskets from each plot was combined into a single sample. Wood was collected every month from January to December 2015. Five dominant species (Andira inermis, Zanthoxylum monophyllum, Guapira fragrans, Casearia guianensis, and Nectandra coriacea) and two other species (Ardisia obovata Desv. ex Ham. and Ficus citrifolia Mill.) in the southeastern subtropical dry forest were selected for wood sampling. Three plants per species were randomly chosen in the southeastern subtropical dry forest. From 10 to 20 Dec 2015, 10 first-year branches per plant were collected. Wood litterfall and branch samples were oven-dried at 65 °C and ground through a 1 mm sieve. Before radiocarbon dating, charcoal samples were cleaned with 1 M hydrochloric acid (HCl) and 1 M sodium hydroxide (NaOH) to remove any adsorbed dissolved organic matter. All samples were dried prior to analysis. The radiocarbon ages of 20 charcoal samples from the northeastern subtropical dry forest were determined by AMS (Accelerator Mass Spectrometry) at the Earth System Science Department, University of California, Irvine, USA; and the radiocarbon ages of 58 charcoal samples from the southeastern subtropical dry forest were determined by AMS at the Lawrence Livermore National Laboratory, California, USA. The calibrated age of charcoal was obtained using the Calib 7.04 software (Queen's University Belfast, Belfast, Northern Ireland, United Kingdom).
The determination of the calibrated age of each radiocarbon date was based on the weighted average of the highest probability distribution within the 2σ ranges of the starting and ending calendar dates. For each forest assemblage, all of the calibrated radiocarbon dates were pooled in a cumulative probability analysis, using the sum probabilities option in Calib 7.04, to plot the probability that a given event occurred at a particular time and thus visualize the fire chronology on the Holocene temporal scale. All carbon-14 (14C) dates were presented in cal yr BP (Frégeau et al. 2015).
Estimation of charcoal decay rate
The decrease of charcoal weight with time was caused by charcoal decay and by the burning of charcoal during subsequent fires. For charcoal found in mineral soils below the surface layer, this decrease of charcoal weight over time was most likely due to charcoal decay, not to fire. Soil charcoal decay is a function of microbial activity wherein soil charcoal is colonized and consumed by soil microbial communities (Moskal-del Hoyo et al. 2010, Tilston et al. 2016). The decay curve of soil charcoal over time is best described as an exponential function. We proposed a novel approach to estimate the charcoal decay rate over time by assuming that the maximum initial size of charcoal that gets into mineral soil in each time interval remains invariant over a 1000-year period. The soil environment for charcoal deposition, such as pore size, drying-rewetting cycles, soil erosion, and burial rates, should be similar over a 1000-year period, because soil development is extremely slow and most residual soils are aged for millions of years in the tropics (Birkeland et al. 1992).
Thus, we have:
$$ Y = Y_0 e^{-bx}, $$
where Y corresponds to the maximum weight of charcoal in each age class, Y0 is the maximum weight of charcoal in mineral soil at age zero, b is the decay rate of charcoal (and its inverse is the average turnover time of charcoal), and x is the calibrated age of charcoal. We used a time interval of 200 yr to identify the charcoal with the maximum weight in each age class for the northeastern dry forest, and a time interval of 1000 yr for the southeastern dry forest, ensuring a minimum of 5 age classes with charcoal presence. We counted the number of dated charcoal samples within each age class and selected the two charcoal samples with maximum weight from each age class. We then obtained the charcoal decay rate b using linear regression after natural logarithm transformation for both the northeastern dry forest (b1) and the southeastern dry forest (b2).
Estimation of charcoal abundance
The minimum detectable weight was 1.4 mg for analysis at the Earth System Science Department, University of California, Irvine; and 3.5 mg at the Lawrence Livermore National Laboratory. Thus, the number of charcoal particles as a function of time was likely underestimated because there were charcoal particles <1.4 mg that were ≥1.4 mg at their initial weight in the northeastern subtropical dry forest, and <3.5 mg that were ≥3.5 mg at their initial weight in the southeastern subtropical dry forest. To estimate real charcoal abundance as a function of time, we first employed Eq. (1) to estimate the initial weight of charcoal particles that were heavier than 1.4 mg at the time of sampling in the northeastern subtropical dry forest and heavier than 3.5 mg at the time of sampling in the southeastern subtropical dry forest.
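The log-linear fit used to obtain the decay rate b can be sketched as follows. This is not the authors' code: the age classes and weights below are synthetic values generated from an assumed Y0 of 500 mg and an assumed decay rate of 0.001 yr⁻¹ (a 1000 yr turnover time, within the range reported in the abstract), so the regression recovers them exactly.

```python
import math

def fit_decay_rate(ages, max_weights):
    """Ordinary least squares on ln(Y) = ln(Y0) - b*x (Eq. 1 after
    natural-log transformation). Returns (Y0, b); 1/b is the mean
    charcoal turnover time."""
    logs = [math.log(w) for w in max_weights]
    n = len(ages)
    mean_x = sum(ages) / n
    mean_y = sum(logs) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(ages, logs)) \
        / sum((x - mean_x) ** 2 for x in ages)
    intercept = mean_y - slope * mean_x
    return math.exp(intercept), -slope  # b is the negative of the slope

# Synthetic maximum weights (mg) per 200 yr age class -- illustrative
# values only, not the study's data.
ages = [100, 300, 500, 700, 900]
weights = [500 * math.exp(-0.001 * a) for a in ages]
y0, b = fit_decay_rate(ages, weights)
turnover_time = 1 / b  # about 1000 yr
```

With noisy field data the same function applies unchanged; only the residual scatter around the fitted line grows.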
The initial weight of the charcoal pieces (Y01) that were older than 200 yr and heavier than 1.4 mg in the northeastern subtropical dry forest was estimated through the equation:
$$ Y_{01} = \frac{1.4}{e^{-b_1 (x - 200)}}, $$
where b1 corresponds to the decay rate of charcoal in the northeastern dry forest, and x is the dated charcoal age. The initial weight of the charcoal pieces (Y02) that were older than 1000 yr and heavier than 3.5 mg in the southeastern subtropical dry forest was calculated using the following equation:
$$ Y_{02} = \frac{3.5}{e^{-b_2 (x - 1000)}}, $$
where b2 corresponds to the decay rate of charcoal in the southeastern dry forest, and x is the dated charcoal age. We then assumed that the abundance-size distribution of original charcoal (before decay occurs) in mineral soils in each age class remains invariant, because depositional environments of soil charcoal are unlikely to change much over the course of soil development spanning many millennia in a residual soil. Thus, the number of undetected charcoal particles (nud1) that were initially heavier than 1.4 mg and became lighter than 1.4 mg due to charcoal decay in the northeastern dry forest was estimated using the equation:
$$ \frac{n_{\mathrm{ud}1}}{n_{\mathrm{d}1}} = \frac{n_{<Y_{01}}}{n_{>Y_{01}}}, $$
where nd1 corresponds to the number of charcoal particles that were heavier than 1.4 mg at the time of sampling in each age class >200 yr in the northeastern dry forest; n<Y01 is the number of charcoal particles that were lighter than Y01 but heavier than 1.4 mg at the time of sampling in the 200 yr age class; and n>Y01 is the number of charcoal particles that were heavier than Y01 at the time of sampling in the 200 yr age class. The sum of nud1 and nd1 is the number of corrected charcoal particles in each age class >200 yr in the northeastern dry forest.
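A minimal sketch of this back-correction, with a hypothetical decay rate standing in for the fitted b1 and hypothetical particle counts (none of the numbers below are from the study):

```python
import math

def initial_weight(threshold_mg, age, decay_rate, reference_age):
    """Back-calculate the initial weight (e.g., Y01) of a particle at the
    detection threshold, given the decay accrued between the reference
    age class and the dated age (the Y01/Y02 equations above)."""
    return threshold_mg / math.exp(-decay_rate * (age - reference_age))

def corrected_count(n_detected, n_below_y0, n_above_y0):
    """Scale the detected count by the abundance-size ratio observed in
    the youngest age class: n_ud / n_d = n_<Y0 / n_>Y0.
    Returns the corrected total n_d + n_ud."""
    return n_detected + n_detected * n_below_y0 / n_above_y0

B1 = 0.001  # hypothetical decay rate (per yr), for illustration only
y01 = initial_weight(1.4, 800, B1, 200)  # a particle dated 800 cal yr BP
total = corrected_count(10, 3, 6)        # 10 detected + 5 inferred = 15
```

Here `y01` is the weight a particle dated 800 cal yr BP must have had to still weigh 1.4 mg today, and `corrected_count` adds the particles presumed to have decayed below the detection threshold.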
Similarly, the number of undetected charcoal particles (nud2) as a function of the 1000 yr age interval in the southeastern subtropical dry forest was estimated using the equation:
$$ \frac{n_{\mathrm{ud}2}}{n_{\mathrm{d}2}} = \frac{n_{<Y_{02}}}{n_{>Y_{02}}}, $$
where nd2 corresponds to the number of charcoal particles that were heavier than 3.5 mg at the time of sampling in each age class >1000 yr in the southeastern dry forest; n<Y02 is the number of charcoal particles that were lighter than Y02 but heavier than 3.5 mg at the time of sampling in the 1000 yr age class; and n>Y02 is the number of charcoal particles that were heavier than Y02 at the time of sampling in the 1000 yr age class. The sum of nud2 and nd2 is the number of corrected charcoal particles in each age class >1000 yr in the southeastern dry forest.
Reconstruction of paleofire history
The random sampling of charcoal does not necessarily assure that all fires will be detected (Frégeau et al. 2015). Therefore, we used EstimateS 9 software (Colwell and Elsensohn 2014) to calculate the estimated fire events based on the observed or corrected charcoal particles. The number of randomizations was set to 100 in the Diversity Settings screen of EstimateS 9. This type of analysis has been used to determine an expected number of species in pooled samples, given the reference sample. The accumulation curves were created according to the relationship between the observed or corrected fire events and dated or corrected charcoal particles. When the curve forms an asymptote, it suggests that most of the fires that occurred at the site have been theoretically estimated.
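The accumulation analysis performed in EstimateS can be approximated with a simple randomization, sketched here under the assumption that each dated charcoal piece has been assigned to a fire event; the fire labels are hypothetical, and this is a simplified stand-in, not the EstimateS algorithm itself:

```python
import random

def accumulation_curve(fire_ids, n_randomizations=100, seed=42):
    """Mean number of distinct fire events detected as dated charcoal
    pieces are accumulated in random order, averaged over repeated
    shuffles (100 randomizations, as in the text)."""
    rng = random.Random(seed)
    n = len(fire_ids)
    totals = [0.0] * n
    for _ in range(n_randomizations):
        order = list(fire_ids)
        rng.shuffle(order)
        seen = set()
        for i, fire in enumerate(order):
            seen.add(fire)
            totals[i] += len(seen)
    return [t / n_randomizations for t in totals]

# Eight dated pieces assigned to three fires (hypothetical labels).
curve = accumulation_curve(["F1", "F1", "F2", "F1", "F3", "F2", "F1", "F3"])
# The curve rises toward an asymptote at the true number of fires (3).
```

Plotting `curve` against the number of pieces reproduces the asymptote behavior described above: once the curve flattens, additional dating effort yields few new fire events.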
An index was produced based on a nonlinear regression of the mean number of fires detected in relation to the number of dated or corrected charcoal pieces using the following equation:
$$ F(n) = F(\max) \left(1 - e^{kn}\right), $$
where F(n) corresponds to the number of fires observed or corrected, n is the number of charcoal pieces dated or corrected, F(max) is considered here as an estimator of the actual number of fires, and k is the constant controlling the shape of the curve (Frégeau et al. 2015). The F(max) index and the constant k were calculated using the exponential regression equation in Sigmaplot 14.0 software (Systat Software Inc., San Jose, California, USA). The mean fire interval (I), that is, the average in calibrated years of all the fire intervals, was calculated for each site:
$$ I = \frac{P}{n_{\mathrm{f}} - 1}, $$
where P corresponds to the fire period, defined here as the time elapsed between the youngest and oldest fires, and nf is the number of fires.
Stable carbon isotope analysis
The ground samples of wood litterfall, live branches, and charcoal were sent to Michigan Technological University's Forest Ecology Stable Isotope Laboratory, Houghton, USA, for the analyses of carbon isotope composition (δ13C) values using a Costech Elemental Combustion System 4010 (Costech Analytical Technologies Inc., Valencia, California, USA) connected to a continuous flow isotope ratio mass spectrometer. δ13C values were reported in reference to the international Pee Dee belemnite standard (Slater et al. 2001).
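The mean fire interval defined above reduces to a one-line computation; the dates below are hypothetical, evenly spaced for illustration rather than taken from the study:

```python
def mean_fire_interval(fire_dates_cal_bp):
    """I = P / (n_f - 1), where P is the time elapsed between the
    youngest and oldest fires and n_f is the number of fires."""
    n_f = len(fire_dates_cal_bp)
    if n_f < 2:
        raise ValueError("at least two fires are needed")
    period = max(fire_dates_cal_bp) - min(fire_dates_cal_bp)
    return period / (n_f - 1)

# 21 hypothetical fires spread over the last ~1300 yr.
dates = [70 + i * 60 for i in range(21)]  # 70 ... 1270 cal yr BP
interval = mean_fire_interval(dates)      # 1200 / 20 = 60.0 yr
```

Note that I divides by the number of intervals (n_f − 1), not the number of fires, which is why at least two dated fires are required.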
The Δ13C values of charcoal, wood litterfall, and live branches were calculated with the equation: $$ {\varDelta}^{13}C=\frac{\updelta^{13}{C}_{\mathrm{air}}-{\updelta}^{13}{C}_{\mathrm{plant}}}{1+\frac{\updelta^{13}{C}_{\mathrm{plant}}}{1000}}, $$ where δ13Cplant is the isotopic value of the wood litterfall, live branches, or charcoal; and δ13Cair is the isotopic value of atmospheric CO2 in a specific time period, taken from a smoothed δ13C curve of atmospheric carbon dioxide (CO2) from 16 100 BC to the present (available at http://web.udl.es/usuaris/x3845331/AIRCO2_LOESS.xls). There are two stable carbon isotopes in air: 12C (carbon-12) and 13C. During photosynthesis, plants preferentially take in 12C rather than 13C (i.e., discrimination against the heavy isotope in favor of the lighter one; Fiorentino et al. 2014). In C3 plants under optimal conditions, the stomata are fully open and the flow of CO2 into the intercellular spaces of the leaf is not limited, leading to strong discrimination, and thus low δ13C and high ∆13C (Fiorentino et al. 2014). Under environmental stress (e.g., drought), plants typically defend against water stress through stomatal closure, increasing water use efficiency and δ13C and consequently decreasing Δ13C in C3 plants (Fiorentino et al. 2014). This is the basis for the extensively reported relationships between plant Δ13C and environmental variables. In many environmental studies, it is assumed that carbon isotope ratios derived from naturally occurring and anthropogenic charcoal directly represent the isotopic values of the wood tissues from which they were formed, and hence record environmental and climatic signals (Hall et al. 2008). For C4 plants, however, these relationships are not robust enough to be easily applicable to archaeobotanical remains (Tieszen and Fagre 1993). Accordingly, for the northeastern subtropical dry forest, the annual mean ∆13C of wood litterfall from 2015 was compared with charcoal ∆13C to infer paleoclimate.
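The discrimination equation above translates directly into code. A minimal sketch (function name is ours; the δ13Cair value below is illustrative only, not taken from the smoothed atmospheric curve):

```python
def delta13c_discrimination(d13c_plant, d13c_air):
    """Carbon isotope discrimination (per mil):
    Delta13C = (d13C_air - d13C_plant) / (1 + d13C_plant / 1000)."""
    return (d13c_air - d13c_plant) / (1.0 + d13c_plant / 1000.0)

# Illustrative values: d13c_air = -6.5 per mil, d13c_plant = -27.0 per mil
# (a typical C3 wood value) give a discrimination of about 21.1 per mil.
example = delta13c_discrimination(-27.0, -6.5)
```

Note the drought signal runs in the expected direction: a less negative δ13Cplant (stomatal closure under stress) yields a smaller ∆13C.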
For the southeastern subtropical dry forest, the ∆13C of first-year live branches from 2015 was compared with charcoal ∆13C to infer paleoclimate. The year 2015 was an extreme drought year in Puerto Rico (Mote et al. 2017).

Charcoal abundance and decay

In the northeastern subtropical dry forest, a total of 31 charcoal fragments (mean of 10.33 per site) were recovered (Table 1). Of all charcoal samples collected, 87.10% were >1.4 mg. Around 80% of charcoal samples were recovered from 0 to 20 cm deep in the soil (Table 2). The oldest charcoal sample was dated at 1221 to 1256 cal yr BP and the youngest at 70 to 117 cal yr BP, but no charcoal was dated between 307 and 558 cal yr BP (Fig. 2, Additional file 1). The number of charcoal fragments increased progressively toward the present: 80% of the fragments were younger than 1000 cal yr BP (Fig. 2, Additional file 1).

Table 1 Parameters of fire histories in subtropical dry forest of Puerto Rico, USA. Charcoal was sampled in December 2015 in the northeastern subtropical dry forest, and in December 2014 in the southeastern subtropical dry forest. The radiocarbon ages of 20 charcoal samples from the northeastern subtropical dry forest were determined by AMS (Accelerator Mass Spectrometry) at the Earth System Science Department, University of California, Irvine; and the radiocarbon ages of 58 charcoal samples from the southeastern subtropical dry forest were determined by AMS at the Lawrence Livermore National Laboratory

Table 2 Depth distribution of charcoal in subtropical dry forest of Puerto Rico, USA, along the soil profile. Charcoal was sampled in December 2015 in the northeastern subtropical dry forest, and in December 2014 in the southeastern subtropical dry forest

Distribution of the cumulated probability of calibrated 14C dates of charcoal in (a) northeastern and (b) southeastern subtropical dry forests in Puerto Rico, USA.
Charcoal was sampled in December 2015 in the northeastern subtropical dry forest, and in December 2014 in the southeastern subtropical dry forest. A cross (+) indicates the occurrence of a fire event. The radiocarbon ages of 20 charcoal samples from the northeastern subtropical dry forest were determined by AMS (Accelerator Mass Spectrometry) in August and November 2016 at the Earth System Science Department, University of California, Irvine, USA; and the radiocarbon ages of 58 charcoal samples from the southeastern subtropical dry forest were determined by AMS in July and August 2015 at the Lawrence Livermore National Laboratory, California, USA. The calibrated age of charcoal was obtained using the Calib 7.0.4 software. For each forest assemblage, all of the calibrated radiocarbon dates were pooled in a cumulative probability analysis, using the sum-probabilities option in Calib 7.0.4 to plot the probability that a given event occurred at a particular time and to visualize the fire chronology on the Holocene temporal scale

In the southeastern subtropical dry forest, a total of 1734 charcoal fragments (mean of 289 per site) were recovered (Table 1). The majority of charcoal fragments were found in the upper 40 cm of soil, especially at 20 to 40 cm depth (Table 2). In all, 95.80% of all charcoal fragments were >3.5 mg. The oldest charcoal was dated at 4806 to 4867 cal yr BP and the youngest at 32 to 83 cal yr BP, but no charcoal was dated between 2762 and 4807 cal yr BP or between 1387 and 2359 cal yr BP (Fig. 2, Additional file 1). The number of charcoal fragments increased progressively toward the present; in all, 62.07% of the fragments were younger than 1000 cal yr BP (Fig. 2, Additional file 1). The mean decay rate of charcoal was 0.0010 yr−1 in the northeastern subtropical dry forest and 0.0008 yr−1 in the southeastern subtropical dry forest (Fig. 3).
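The cumulative-probability pooling described in the figure caption can be approximated with a short sketch. This is our simplification, not Calib's algorithm: each calibrated 2σ range is modeled as a normal density centered on its midpoint, and the densities are summed on a common time grid; Calib sums the full calibrated probability distributions.

```python
import math

def summed_probability(cal_ranges, t_min, t_max, step=10):
    """Simplified analogue of a sum-probabilities plot: approximate each
    calibrated 2-sigma range (lo, hi) in cal yr BP by a normal density
    (mean = midpoint, sd = quarter of the range width) and sum the
    densities over a regular time grid."""
    grid = list(range(t_min, t_max + step, step))
    total = [0.0] * len(grid)
    for lo, hi in cal_ranges:
        mu = (lo + hi) / 2.0
        sd = max((hi - lo) / 4.0, 1.0)  # guard against zero-width ranges
        for i, t in enumerate(grid):
            z = (t - mu) / sd
            total[i] += math.exp(-0.5 * z * z) / (sd * math.sqrt(2.0 * math.pi))
    return grid, total

# Hypothetical single date spanning 70 to 117 cal yr BP:
grid, curve = summed_probability([(70, 117)], 0, 300)
```

Peaks in the summed curve then mark periods when fires were most probable, which is how the fire chronology in Fig. 2 is read.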
The mean turnover times of charcoal in the northeastern and southeastern subtropical dry forests were 1000 and 1250 yr, respectively. In the northeastern subtropical dry forest, three unaccounted charcoal samples were lighter than 1.4 mg at the time of sampling but heavier than 1.4 mg at the time of deposition during 1000 to 1200 cal yr BP, and one unaccounted charcoal sample during 1200 to 1400 cal yr BP. In the southeastern subtropical dry forest, six unaccounted charcoal samples were lighter than 3.5 mg at the time of sampling but heavier than 3.5 mg at the time of deposition between 2000 and 3000 cal yr BP, five unaccounted charcoal samples during 3000 to 4000 cal yr BP, and 10 unaccounted charcoal samples during 4000 to 5000 cal yr BP (Fig. 4).

The weight loss of the heaviest two charcoal samples as a function of (a) 200-year age classes in the northeastern subtropical dry forest, and (b) 1000-year age classes in the southeastern subtropical dry forest in Puerto Rico, USA. Diamonds (♦) indicate the weights of the heaviest charcoal samples (two in each age class); the line is the decay curve of soil charcoal over time. Charcoal was sampled in December 2015 in the northeastern subtropical dry forest, and in December 2014 in the southeastern subtropical dry forest

Number of dated and corrected charcoal fragments as a function of (a) 200-year age classes in the northeastern subtropical dry forest and (b) 1000-year age classes in the southeastern subtropical dry forest in Puerto Rico, USA.
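The decay model behind these numbers is first-order exponential loss. A minimal sketch (function names are ours) of the rate estimate from the heaviest particles of two age classes, and of the resulting turnover time:

```python
import math

def decay_rate(m_young, m_old, dt):
    """Estimate the decay constant k (yr^-1) from the maximum charcoal
    masses of two age classes separated by dt years, assuming an equal
    initial maximum size: m_old = m_young * exp(-k * dt)."""
    return math.log(m_young / m_old) / dt

def turnover_time(k):
    """Mean residence (turnover) time of charcoal under first-order
    decay: 1 / k."""
    return 1.0 / k
```

With the fitted rates of 0.0010 and 0.0008 yr−1, `turnover_time` returns the 1000 and 1250 yr values quoted above.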
The radiocarbon ages of 20 charcoal samples from the northeastern subtropical dry forest were determined by AMS (Accelerator Mass Spectrometry) in August and November 2016 at the Earth System Science Department, University of California, Irvine, USA; and the radiocarbon ages of 58 charcoal samples from the southeastern subtropical dry forest were determined by AMS in July and August 2015 at the Lawrence Livermore National Laboratory, California, USA

Paleofire history

Before correction for charcoal decay, the dated charcoal revealed that the fire regime of the northeastern subtropical forest comprised 17 detected fires and 19 estimated fires over the last 1300 yr (Fig. 5a, Table 1). In all, 76% of the detected fires (13 fires) in the northeastern subtropical dry forest occurred between 559 and 1261 cal yr BP, with a mean fire interval of 59 yr during this period (Fig. 2a). There was a fire-free interval between 306 and 558 cal yr BP in the northeastern subtropical dry forest (Fig. 2a). In the past 300 yr, fewer fires (4 fires) were detected in the northeastern subtropical dry forest, with a longer fire interval (103 yr; Fig. 2a).

Accumulated curves of the (a and c) observed and (b and d) corrected number of fire events based on dated 14C charcoal fragments before and after correction for charcoal decay, respectively, in the (a and b) northeastern and (c and d) southeastern subtropical dry forest of Puerto Rico, USA. Charcoal was sampled in December 2015 in the northeastern subtropical dry forest, and in December 2014 in the southeastern subtropical dry forest. Dots (•) indicate the mean number of fire events for the corresponding number of charcoal particles; horizontal bars indicate the 95% variation range of fire events for the corresponding number of charcoal particles; and lines indicate the accumulated curves. The diversity function of EstimateS 9 was used to calculate the observed or estimated fire events from dated or estimated charcoal particles.
The F(max) index and the constant k were calculated using the exponential regression equation in SigmaPlot 14.0

Over the last 5000 yr, fewer fires were recorded in the southeastern than in the northeastern subtropical dry forest. Before correction for charcoal decay, the dated charcoal revealed that the fire regime of the southeastern subtropical forest comprised nine detected fires and nine estimated fires (Fig. 5c, Table 1). Only one fire occurred between 4822 and 4852 cal yr BP, and one between 2717 and 2753 cal yr BP (Fig. 2b). Similar to the northeastern subtropical dry forest, 66.66% of the detected fires (6 fires) in the southeastern subtropical dry forest occurred between 539 and 1358 cal yr BP, and the mean fire interval for this period was 164 yr (Fig. 2b). In the last 521 yr, only one fire was detected in the southeastern subtropical dry forest (Fig. 2b). With the addition of four undetected charcoal samples after correction for charcoal decay, the estimated fire events increased from 19 to 21 over the past 1300 yr in the northeastern subtropical dry forest (Fig. 5b, Table 1). The two additional fire events would have occurred between 1000 and 1400 cal yr BP, which decreased the fire interval between 559 and 1261 cal yr BP from 59 yr to 47 yr in the northeastern subtropical dry forest. With the addition of 21 undetected charcoal samples after correction for charcoal decay, the estimated fire events increased from 9 to 10 in the southeastern subtropical dry forest (Fig. 5d, Table 1). This one additional fire event would have occurred between 1000 and 5000 cal yr BP, which did not change the fire interval during 539 to 1358 cal yr BP in the southeastern subtropical dry forest.

Stable carbon isotope in plant wood tissues and charcoal

We did not separate taxa attribution for the charcoal in this analysis.
The ∆13C value of charcoal from the northeastern subtropical dry forest ranged from 17.36‰ to 22.28‰ and showed a decreasing trend over the last 1300 yr (Fig. 6a). The ∆13C value of wood deposited in the baskets in 2015 fell near the middle of the charcoal ∆13C range. The ∆13C value of charcoal from the southeastern subtropical dry forest varied from 17.70‰ to 26.35‰ and can be divided into two successive phases (Fig. 6b). From 800 to 5000 cal yr BP, 95% of charcoal ∆13C values of the southeastern subtropical dry forest fell within the ∆13C values of live branches grown in 2015 (21.42‰ to 23.58‰). In the most recent 800 yr, only 57% of charcoal ∆13C values were lower than the ∆13C values of branches in 2015; the remaining ∆13C values were greater than the ∆13C of branches in 2015 (Fig. 6b).

Time series of 13C discrimination (∆13C) in the (a) northeastern and (b) southeastern subtropical dry forest of Puerto Rico, USA. Charcoal was sampled in December 2015 in the northeastern subtropical dry forest, and in December 2014 in the southeastern subtropical dry forest. The black dotted lines represent average ∆13C values of (a) wood litterfall in the northeastern subtropical dry forest during the 2015 drought year, and (b) branches collected in December 2015 in the southeastern subtropical dry forest. The gray dotted lines represent average values ± SD of ∆13C of branches collected in December 2015 in the southeastern subtropical dry forest. Diamonds (♦) represent the time series of ∆13C. δ13C values were analyzed at Michigan Technological University in August and December 2016, and May 2017

The radiocarbon age of a charcoal fragment corresponds to the time when the wood that became charcoal was actually produced, not to the actual age of the fire event (de Lafontaine and Payette 2011). Therefore, the radiocarbon age may be several centuries older than the actual date of the fire that produced the charcoal in most forests (de Lafontaine and Payette 2011). This is the "inbuilt age error."
The magnitude of the inbuilt age error depends on forest stand age structure and the rate of wood decay (Gavin 2001, Gavin et al. 2003), as well as on the prevailing fire regime itself (Higuera et al. 2005). In sites experiencing more frequent fires, with short-lived trees and fast-decaying wood, the radiocarbon dates of charcoal can be regarded as proxies for actual fire ages (de Lafontaine and Payette 2011). In the northeastern and southeastern subtropical dry forests of Puerto Rico, estimates of mean fire interval were all between 63 and 607 yr. Data on tree lifespan in tropical dry forests are broadly lacking; tree lifespan in some tropical rain forests is between 70 and 138 yr (Swaine et al. 1987). Wood decay turnover time is around 2 to 22 yr in Puerto Rican wet and dry forests (Torres and González 2005, González et al. 2008). Our estimated charcoal decay rates were 0.0010 yr−1 in the northeastern dry forest and 0.0008 yr−1 in the southeastern dry forest of Puerto Rico, corresponding to residence times of 1000 and 1250 yr, respectively. The residence time of soil pyrogenic carbon appears to vary regionally (Ohlson et al. 2009). The average lifespan of charcoal in the subtropical dry forest soils of Puerto Rico is longer than in boreal forest soils (652 yr; Ohlson et al. 2009) and in Russian steppe soils (182 to 541 yr; Hammes et al. 2008), but shorter than in Australian savannah soils (1300 to 2600 yr; Lehmann et al. 2008). We employed a novel approach to estimate the charcoal decay rate from the maximum charcoal size in each age class over time, based on the negative exponential decay function typical of microbially mediated decomposition, under the assumption that the initial maximum size of charcoal in each age class is relatively invariant over time.
Our approach is more rational than Frégeau et al.'s (2015) method of estimating charcoal decay rate from charcoal abundance in each age class over time, which assumed that the number of initial charcoal fragments originating from fires was the same in each 200 yr age class. It is well understood that charcoal abundance differs among age classes, not only because charcoal decays, but also because fire frequency and intensity vary over time as climate conditions change (Frégeau et al. 2015). Indeed, such variation in fire frequency and intensity is exactly what studies reconstructing paleoclimate and documenting human disturbance seek to capture (Caffrey and Horn 2014). Our data suggest that charcoal decays at a relatively constant rate in the northeastern and southeastern dry forests of Puerto Rico, regardless of the substantial difference in their annual mean temperature and precipitation. Our assumption of an invariant maximum size of initial charcoal across age classes relies on the understanding that soil development is extremely slow and that most residual soils in the tropics are millions of years old (Birkeland et al. 1992); thus, natural depositional conditions, such as soil pore size, drying-rewetting cycles, and erosion and burial processes, likely changed little within each time interval of 200 to 1000 yr over the past 5000 years. However, anthropogenic activities may alter the natural depositional environment, thus affecting the estimation of charcoal decay rate by our approach. Charcoal decay may lead to an underestimation of paleofire events. In this study, we attempted to correct this underestimation by reconstructing initial charcoal sizes from the charcoal decay rate, assuming that the abundance-size distribution of undecayed charcoal did not vary among charcoal age classes. This approach yielded two and one additional undetected fire events for the northeastern and southeastern dry forests of Puerto Rico, respectively.
Similarly, our assumption of an invariant abundance-size distribution of undecayed charcoal relies on the fact that residual soils take millions of years to develop and that the depositional environment of soil charcoal varies little over a 1000-year period. Again, soil disturbances and species invasion can violate this assumption. Our detected and estimated paleofire events were 17 and 21 for the northeastern dry forest, and 9 and 10 for the southeastern dry forest of Puerto Rico, respectively. Both the northeastern and southeastern subtropical dry forests of Puerto Rico showed a noticeable peak of fire activity between 500 and 1400 cal yr BP, suggesting either a dry climate or increased human activity. The paleofires might be ascribed to slash-and-burn agriculture by pre-Columbian native peoples. The development of cultigens began around 2600 cal yr BP in Cuba (Peros et al. 2015) and probably followed shortly thereafter in Puerto Rico. Humans settled around Laguna Grande, near the northeastern subtropical dry forest of Puerto Rico, at ~2000 cal yr BP, and this settlement might have led to deforestation by slash-and-burn agriculture in the majority of the watersheds (Lane et al. 2013). On the other hand, the frequent fires around 800 to 1110 cal yr BP in the subtropical dry forests of Puerto Rico were similar to the intense fires that occurred after hurricanes around 800 to 1000 cal yr BP on the Gulf of Mexico coast (Liu et al. 2008), in Costa Rica (Horn and Sanford 1992), and at Laguna Alejandro in the Dominican Republic (LeBlanc et al. 2017). On occasion, Puerto Rico, the Dominican Republic, and the Gulf of Mexico coast have lain in the path of the same hurricanes (e.g., Hurricane Hugo in 1989, Hurricane Georges in 1998, and Hurricanes Irma and Maria in 2017).
Thus, the frequent fires around 800 to 1110 cal yr BP in the subtropical dry forest of Puerto Rico might also be explained by inferred hurricanes that directly struck the Gulf of Mexico coast, Costa Rica, and Laguna Alejandro in the Dominican Republic around 800 to 1110 cal yr BP. Because canopies open after hurricanes, insolation and wind speed increase beneath them, leading to drier microclimates and drier litter on the forest floor after a hurricane (Myers and van Lear 1998). The lower ∆13C values of charcoal fragments from 800 to 1100 cal yr BP in the subtropical dry forest suggest a drier microclimate during this period. Another characteristic of paleofire patterns shared by the northeastern and southeastern subtropical dry forests of Puerto Rico is that fire frequency decreased after the immigration of Europeans in the last 500 yr. When strong trends in biomass burning are inconsistent with climate trends, human activity becomes clearly evident (Marlon et al. 2013). Humans might have influenced the fire regime by burning during the wetter seasons for agricultural uses, resulting in more controlled, lower-intensity fires (Burney 1997). The three fires in the subtropical forests of Puerto Rico that produced higher ∆13C values of charcoal fragments during 400 to 180 cal yr BP (the Little Ice Age) might be ascribed to human ignitions. Thus, the apparent peak of fire activity between 500 and 1400 cal yr BP might be attributed to increased human activity or to increased drought stress after hurricanes. Charcoal older than 1300 cal yr BP was not found in the northeastern subtropical dry forest of Puerto Rico, whereas two charcoal samples dated around 2500 cal yr BP and one dated around 5000 cal yr BP were detected in the southeastern subtropical dry forest of Puerto Rico. This does not necessarily indicate that fires did not occur before 1300 cal yr BP in the northeastern subtropical dry forest.
Instead, it suggests that charcoal older than 1300 yr had decayed below our minimum detectable weight of 1.4 mg in the northeastern subtropical dry forest. Because the average turnover time of charcoal in these two subtropical dry forests of Puerto Rico is similar, between 1000 and 1250 yr, the three charcoal particles older than 2500 yr might have broken off larger charcoal particles that remained in the soil after paleofires in the southeastern dry forest. In fact, the largest charcoal sample, weighing 178.1 mg (14C age = 190 ± 30 yr), came from the southeastern dry forest and was more than six times heavier than the largest sample (27.4 mg; 14C age = 235 ± 20 yr) from the northeastern dry forest. The time period (4822 to 4854 cal yr BP) for the oldest fire at our study site is in line with the ~5000 cal yr BP paleofire at Laguna Tortuguero, Puerto Rico, which was regarded as evidence of the onset of human disturbance on the landscape. Thus, the 4822 to 4854 cal yr BP fire in the southeastern subtropical dry forest of Puerto Rico was most likely evidence of a settlement of native peoples that predates the archeological evidence (Burney et al. 1994), although it could also have been a natural fire. Neotropical forests rarely ignite from natural causes (Kauffman and Uhl 1990) because of the high silica content of the leaves and leaf litter (Ter Welle 1976, Mak 1988). The probability of natural fire in neotropical forests can increase, of course, when invasive plants colonize the understory (Brooks et al. 2004) or when human populations deliberately cut and burn these forests (Bush et al. 2008). Puerto Rico is thought to have been initially occupied by native peoples about 4713 cal yr BP, based on evidence from Maruca (17.99° N, 66.62° W; Rivera-Collazo et al. 2015). Our finding of the 4822 to 4854 cal yr BP fire in the southeastern subtropical dry forest of Puerto Rico may indicate an earlier colonization date for Puerto Rico.
The decay rate of soil charcoal in the subtropical dry forests of Puerto Rico was 0.0008 to 0.0010 yr−1. We estimate that one to two fire events went undetected due to charcoal decay in the subtropical forests of Puerto Rico in the Late Holocene. Our soil macrocharcoal analysis revealed that 21 fire events occurred over the last 1300 yr in the northeastern subtropical dry forest of Puerto Rico, and 10 fire events over the last 4900 yr in the southeastern subtropical dry forest. The 4822 to 4854 cal yr BP fire in the southeastern subtropical dry forest of Puerto Rico could have been a natural fire or, more likely, was an indication of the initial occupation of this island by native peoples. Peak fire events occurred during 500 to 1400 cal yr BP in the subtropical dry forest of Puerto Rico. This paleofire peak may be ascribed to the agricultural activities of pre-Columbian dwellers and to inferred hurricanes that directly struck the Gulf of Mexico coast, Costa Rica, and Laguna Alejandro in the Dominican Republic around 800 to 1110 cal yr BP. Fire frequency in the subtropical dry forest of Puerto Rico decreased after the immigration of Europeans in the last 500 years. Future studies should examine temporal changes in paleo-vegetation and anthropology to improve understanding of the causes of paleofire, and to evaluate fire-mediated changes in vegetation and climate in the subtropical dry forests of Puerto Rico.

Adámek, M., P. Bobek, V. Hadincová, J. Wild, and M. Kopecký. 2015. Forest fires within a temperate landscape: a decadal and millennial perspective from a sandstone region in central Europe. Forest Ecology and Management 336: 81–90. https://doi.org/10.1016/j.foreco.2014.10.014 Barry, R.G., and R.J. Chorley. 2010. Atmosphere, weather and climate. Routledge, London, England, United Kingdom. https://doi.org/10.4324/9780203871027 Birkeland, P., I. Martini, and W. Chesworth. 1992.
Quaternary soil chronosequences in various environments—extremely arid to humid tropical. Pages 261-281 in: I.P. Martini and W. Chesworth, editors. Weathering, soils and paleosols. Elsevier, Amsterdam, The Netherlands. https://doi.org/10.1016/B978-0-444-89198-3.50016-7 Brooks, M.L., C.M. D'Antonio, D.M. Richardson, J.B. Grace, J.E. Keeley, J.M. DiTomaso, R.J. Hobbs, M. Pellant, and D. Pyke. 2004. Effects of invasive alien plants on fire regimes. BioScience 54: 677–688. https://academic.oup.com/bioscience/article/54/7/677/223532 Burney, D.A. 1997. Tropical islands as paleoecological laboratories: gauging the consequences of human arrival. Human Ecology 25: 437–457. https://doi.org/10.1023/A:1021823610090 Burney, D.A., L.P. Burney, and R.D.E. MacPhee. 1994. Holocene charcoal stratigraphy from Laguna Tortuguero, Puerto Rico, and the timing of human arrival on the island. Journal of Archaeological Science 21: 273–281. https://doi.org/10.1006/jasc.1994.1027 Bush, M.B., A.M. Alfonso-Reynolds, D.H. Urrego, B.G. Valencia, Y.A. Correa-Metrio, M. Zimmerman, M.R. Silman, and J.C. Svenning. 2015. Fire and climate: contrasting pressures on tropical Andean timberline species. Journal of Biogeography 42: 938–950. https://doi.org/10.1111/jbi.12470 Bush, M.B., M. Silman, C. McMichael, and S. Saatchi. 2008. Fire, climate change and biodiversity in Amazonia: a Late-Holocene perspective. Philosophical Transactions of the Royal Society B 363: 1795–1802. https://doi.org/10.1098/rstb.2007.0014 Caffrey, M.A., and S.P. Horn. 2014. Long-term fire trends in Hispaniola and Puerto Rico from sedimentary charcoal: a comparison of three records. The Professional Geographer 67: 229–241. https://doi.org/10.1080/00330124.2014.922017 Cochrane, M.A., and M.D. Schulze. 1999. Fire as a recurrent event in tropical forests of the eastern Amazon: effects on forest structure, biomass, and species composition. Biotropica 31: 2–16. https://doi.org/10.2307/2663955 Colwell, R.K., and J.E. Elsensohn. 2014. 
EstimateS turns 20: statistical estimation of species richness and shared species from samples, with non-parametric extrapolation. Ecography 37: 609–613. Crausbay, S.D., P.H. Martin, and E.F. Kelly. 2015. Tropical montane vegetation dynamics near the upper cloud belt strongly associated with a shifting ITCZ and fire. Journal of Ecology 103: 891–903. https://doi.org/10.1111/1365-2745.12423 de Lafontaine, G., and S. Payette. 2011. Long-term fire and forest history of subalpine balsam fir (Abies balsamea) and white spruce (Picea glauca) stands in eastern Canada inferred from soil charcoal analysis. Holocene 22: 191–201. https://doi.org/10.1177/0959683611414931 Ewel, J.J., and J.L. Whitmore. 1973. The ecological life zones of Puerto Rico and the US Virgin Islands. https://www.fs.usda.gov/treesearch/pubs/5551 Accessed 15 Nov 2015. Fiorentino, G., J.P. Ferrio, A. Bogaard, J.L. Araus, and S. Riehl. 2014. Stable isotopes in archaeobotanical research. Vegetation History and Archaeobotany 24: 215–227. https://doi.org/10.1007/s00334-014-0492-9 Foster, D.R., M. Fluet, and E. Boose. 1999. Human or natural disturbance: landscape-scale dynamics of the tropical forests of Puerto Rico. Ecological Applications 9: 555–572. https://doi.org/10.2307/2641144 Frégeau, M., S. Payette, and P. Grondin. 2015. Fire history of the central boreal forest in eastern North America reveals stability since the mid-Holocene. Holocene 25: 1912–1922. https://doi.org/10.1177/0959683615591361 Gavin, D.G. 2001. Estimation of inbuilt age in radiocarbon ages of soil charcoal for fire history studies. Radiocarbon 43: 27–44. https://doi.org/10.1017/S003382220003160X Gavin, D.G., L.B. Brubaker, and K.P. Lertzman. 2003. Holocene fire history of a coastal temperate rain forest based on soil charcoal radiocarbon dates. Ecology 84: 186–201. https://esajournals.onlinelibrary.wiley.com/doi/abs/10.1890/0012-9658(2003)084%5B0186:HFHOAC%5D2.0.CO%3B2 González, G., W.A. Gould, A.T. Hudak, and T.N. Hollingsworth. 2008. 
Decay of aspen (Populus tremuloides Michx.) wood in moist and dry boreal, temperate, and tropical forest fragments. AMBIO: A Journal of the Human Environment 37: 588–597. https://doi.org/10.1579/0044-7447-37.7.588 Gould, W., G. González, and R.G. Carrero. 2006. Structure and composition of vegetation along an elevational gradient in Puerto Rico. Journal of Vegetation Science 17: 653–664. https://doi.org/10.1111/j.1654-1103.2006.tb02489.x Guariguata, M.R. 1990. Landslide disturbance and forest regeneration in the upper Luquillo Mountains of Puerto Rico. Journal of Ecology 78: 814–832. https://doi.org/10.2307/2260901 Hall, G., S. Woodborne, and M. Scholes. 2008. Stable carbon isotope ratios from archaeological charcoal as palaeoenvironmental indicators. Chemical Geology 247: 384–400. https://doi.org/10.1016/j.chemgeo.2007.11.001 Hammes, K., M.S. Torn, A.G. Lapenas, and M.W. Schmidt. 2008. Centennial black carbon turnover observed in a Russian steppe soil. Biogeosciences 5: 1339–1350. https://doi.org/10.5194/bg-5-1339-2008 Higuera, P.E., D.G. Sprugel, and L.B. Brubaker. 2005. Reconstructing fire regimes with charcoal from small-hollow sediments: a calibration with tree-ring records of fire. Holocene 15: 238–251. https://doi.org/10.1191/0959683605hl789rp Hjerpe, J., H. Hedenås, and T. Elmqvist. 2001. Tropical rain forest recovery from cyclone damage and fire in Samoa. Biotropica 33: 249–259. https://doi.org/10.1111/j.1744-7429.2001.tb00176.x Horn, S.P., and R.L. Sanford Jr. 1992. Holocene fires in Costa Rica. Biotropica 24: 354–361. https://doi.org/10.2307/2388605 Hubau, W., J. Van den Bulcke, J. Van Acker, and H. Beeckman. 2015. Charcoal-inferred Holocene fire and vegetation history linked to drought periods in the Democratic Republic of Congo. Global Change Biology 21: 2296–2308. https://doi.org/10.1111/gcb.12844 Inoue, J., R. Okunaka, and T. Kawano. 2016.
The relationship between past vegetation type and fire frequency in western Japan inferred from phytolith and charcoal records in cumulative soils. Quaternary International 397: 513–522. https://doi.org/10.1016/j.quaint.2015.02.039 Kauffman, J., and C. Uhl. 1990. Interactions of anthropogenic activities, fire, and rain forests in the Amazon Basin. Pages 117-134 in: J.G. Goldammer, editor. Fire in the tropical biota. Springer, Berlin, Heidelberg, Germany. https://doi.org/10.1007/978-3-642-75395-4_8 Lane, C.S., J.J. Clark, A. Knudsen, and J. McFarlin. 2013. Late-Holocene paleoenvironmental history of bioluminescent Laguna Grande, Puerto Rico. Palaeogeography, Palaeoclimatology, Palaeoecology 369: 99–113. https://doi.org/10.1016/j.palaeo.2012.10.007 LeBlanc, A.R., L.M. Kennedy, K. Liu, and C.S. Lane. 2017. Linking hurricane landfalls, precipitation variability, fires, and vegetation response over the past millennium from analysis of coastal lagoon sediments, southwestern Dominican Republic. Journal of Paleolimnology 58: 135–150. https://doi.org/10.1007/s10933-017-9965-z Lehmann, J., J. Skjemstad, S. Sohi, J. Carter, M. Barson, P. Falloon, K. Coleman, P. Woodbury, and E. Krull. 2008. Australian climate–carbon cycle feedback reduced by soil black carbon. Nature Geoscience 1: 832–835. https://doi.org/10.1038/ngeo358 Liu, K., H. Lu, and C. Shen. 2008. A 1200-year proxy record of hurricanes and fires from the Gulf of Mexico coast: testing the hypothesis of hurricane–fire interactions. Quaternary Research 69: 29–41. https://doi.org/10.1016/j.yqres.2007.10.011 Mak, E.H. 1988. Measuring foliar flammability with the limiting oxygen index method. Forest Science 34: 523–529. Marlon, J.R., P.J. Bartlein, A.L. Daniau, S.P. Harrison, S.Y. Maezumi, M.J. Power, W. Tinner, and B. Vanniere. 2013. Global biomass burning: a synthesis and review of Holocene paleofire records and their controls. Quaternary Science Reviews 65: 5–25.
https://doi.org/10.1016/j.quascirev.2012.11.029 McMichael, C., A. Correa-Metrio, and M. Bush. 2012. Pre-Columbian fire regimes in lowland tropical rainforests of southeastern Peru. Palaeogeography, Palaeoclimatology, Paleoecology 342: 73–83. https://doi.org/10.1016/j.palaeo.2012.05.004 Moskal-del Hoyo, M., M. Wachowiak, and R. Blanchette. 2010. Preservation of fungi in archaeological charcoal. Journal of Archaeological Science 37: 2106–2116. https://doi.org/10.1016/j.jas.2010.02.007 Mote, T.L., C.A. Ramseyer, and P.W. Miller. 2017. The Saharan Air Layer as an early rainfall season suppressant in the eastern Caribbean: the 2015 Puerto Rico drought. Journal of Geophysical Research, Atmospheres 122: 10,966–10,982. https://doi.org/10.1002/2017JD026911 Muñoz, M.A., W.I. Lugo, C. Santiago, M. Matos, S. Ríos, and J. Lugo. 2017. Taxonomic classification of the soils of Puerto Rico. https://dire.uprm.edu/handle/20.500.11801/817 Accessed 23 Jan 2018. Myers, R.K., and D.H. van Lear. 1998. Hurricane-fire interactions in coastal forests of the South: a review and hypothesis. Forest Ecology and Management 103: 265-276. https://doi.org/10.1016/S0378-1127(97)00223-5 Ohlson, M., B. Dahlberg, T. Økland, K.J. Brown, and R. Halvorsen. 2009. The charcoal carbon pool in boreal forest soils. Nature Geoscience 2: 692–695. https://doi.org/10.1038/ngeo617 Pascarella, J.B., A.T. Mitchell, and J.K. Zimmerman. 2004. Short-term response of secondary forests to hurricane disturbance in Puerto Rico, USA. Forest Ecology and Management 199: 379–393. https://doi.org/10.1016/j.foreco.2004.05.041 Payette, S., A. Delwaide, A. Schaffhauser, and G. Magnan. 2012. Calculating long-term fire frequency at the stand scale from charcoal data. Ecosphere 3(7): 59. https://doi.org/10.1890/ES12-00026.1 Payette, S., V. Pilon, P.L. Couillard, and J. Laflamme. 2017. Fire history of Appalachian forests of the Lower St-Lawrence region (southern Quebec). Forests 8: 120. https://doi.org/10.3390/f8040120 Peros, M., B. 
Gregory, F. Matos, E. Reinhardt, and J. Desloges. 2015. Late-Holocene record of lagoon evolution, climate change, and hurricane activity from southeastern Cuba. Holocene 25: 1483–1497. https://doi.org/10.1177/0959683615585844 Ping, C.L., G.J. Michaelson, C.A. Stiles, and G. González. 2013. Soil characteristics, carbon stores, and nutrient distribution in eight forest types along an elevation gradient, eastern Puerto Rico. Ecological Bulletins 54: 67–86. Rivera-Collazo, I., A. Winter, D. Scholz, A. Mangini, T. Miller, Y. Kushnir, and D. Black. 2015. Human adaptation strategies to abrupt climate change in Puerto Rico ca. 3.5 ka. The Holocene 25: 627–640. https://doi.org/10.1177/0959683614565951 Slater, C., T. Preston, and L.T. Weaver. 2001. Stable isotopes and the international system of units. Rapid Communications in Mass Spectrometry 15: 1270–1273. https://doi.org/10.1002/rcm.328 Swaine, M., D. Lieberman, and F.E. Putz. 1987. The dynamics of tree populations in tropical forest: a review. Journal of Tropical Ecology 3: 359–366. Ter Welle, B.J.H. 1976. Silica grains in woody plants of the Neotropics, especially Surinam. Pages 107-142 in: P. Baas, A.J. Bolton, and D.M. Catling, editors. Wood structure in biological and technological research. Leiden botanical series 3. Leiden University Press, Leiden, The Netherlands. Tieszen, L.L., and T. Fagre. 1993. Carbon isotopic variability in modern and archaeological maize. Journal of Archaeological Science 20: 25–40. https://doi.org/10.1006/jasc.1993.1002 Tilston, E.L., P. Ascough, M.H. Garnett, and M.I. Bird. 2016. Quantifying charcoal degradation and negative priming of soil organic matter with a 14C-dead tracer. Radiocarbon 58: 905–919. https://doi.org/10.1017/RDC.2016.45 Torres, J.A., and G. González. 2005. Wood decomposition of Cyrilla racemiflora (Cyrillaceae) in Puerto Rican dry and wet forests: a 13-year case study. Biotropica 37: 452–456. https://doi.org/10.1111/j.1744-7429.2005.00059.x Tovar, C., E. Breman, T. 
Brncic, D.J. Harris, R. Bailey, and K.J. Willis. 2014. Influence of 1100 years of burning on the central African rainforest. Ecography 37: 1139–1148. https://doi.org/10.1111/ecog.00697 USDA Soil Conservation Service. 1977. Soil survey of Humacao area of eastern Puerto Rico. https://www.nrcs.usda.gov/Internet/FSE_MANUSCRIPTS/puerto_rico/PR689/0/Humacao.pdf Accessed 10 July 2015. Waide, R.B., and A.E. Lugo. 1992. A research perspective on disturbance and recovery of a tropical montane forest. Pages 173-190 in: J.G. Goldammer, editor. Tropical forests in transition. Birkhäuser, Basel, Switzerland. https://doi.org/10.1007/978-3-0348-7256-0_12 Zimmerman, J.K., E.M. Everham III, R.B. Waide, D.J. Lodge, C.M. Taylor, and N.V. Brokaw. 1994. Responses of tree species to hurricane winds in subtropical wet forest in Puerto Rico: implications for tropical tree life histories. Journal of Ecology 82: 911–922. https://doi.org/10.2307/2261454 We are most grateful to H. Robles, M. Rivera, and I. Vicens for field assistance, and to Dr. A. Lugo for valuable comments. All research at the USDA Forest Service International Institute of Tropical Forestry was done in collaboration with the University of Puerto Rico. This study was financially supported by a cooperative project between the International Institute of Tropical Forestry, USDA-Forest Service, and the University of Puerto Rico (14-JV-11120101-018). G. González was supported by the Luquillo Critical Zone Observatory (National Science Foundation grant EAR-1331841) and the Luquillo Long-Term Ecological Research Site (National Science Foundation grant DEB-1239764). Department of Environmental Sciences, College of Natural Sciences, University of Puerto Rico, P.O. 
Box 70377, San Juan, PR, 00936-8377, USA: Wei Huang, Xianbin Liu & Xiaoming Zou
International Institute of Tropical Forestry, USDA Forest Service, Jardín Botánico Sur, 1201 Calle Ceiba, Río Piedras, PR, 00926-1119, USA: Grizelle González
College of Biology and the Environment, Nanjing Forestry University, 159 Longpan Road, Nanjing, 210037, Jiangsu, China: Xiaoming Zou
WH, XL, GG and XZ designed the experiments. WH and XL performed the experiments and analyzed the data. WH, XL, GG and XZ contributed to writing the paper. All authors reviewed the manuscript. All authors read and approved the final manuscript. Correspondence to Xiaoming Zou.
Additional file: The 78 accelerator mass spectrometry radiocarbon dates and stable carbon isotope data of soil charcoal. Charcoal was sampled in December 2015 in the northeastern subtropical dry forest, and in December 2014 in the southeastern subtropical dry forest of Puerto Rico, USA. The radiocarbon ages of 20 charcoal samples from the northeastern subtropical dry forest were determined by AMS (Accelerator Mass Spectrometry) in August and November 2016 at the Earth System Science Department, University of California, Irvine, USA; and the radiocarbon ages of 58 charcoal samples from the southeastern subtropical dry forest were determined by AMS in July and August 2015 at the Lawrence Livermore National Laboratory, California, USA. δ13C values were analyzed at Michigan Technological University. BP = before present. (DOCX 25 kb)
Huang, W., Liu, X., González, G. et al. Late Holocene fire history and charcoal decay in subtropical dry forests of Puerto Rico. fire ecol 15, 14 (2019). https://doi.org/10.1186/s42408-019-0033-0
Keywords: charcoal decay, paleofire, Puerto Rico, forest soil charcoal
The running time of an algorithm is at most $O(n^2)$

The problem is that if an algorithm is $O(n^2)$ then it is also $O(n^3)$, $O(n^4)$, $O(n^n)$, $\ldots$, and so the phrase "at most" does not seem to make sense in this situation. For this reason, I am not sure whether this statement is correct or not. – jsbcjsbc

Tags: time-complexity, asymptotics

I'd say the problem with using "at most $O(n^2)$" is that there isn't a particularly well-known and well-defined meaning of "at most" in this context, which can lead to ambiguity (unless the surrounding context makes the meaning clear). I can think of a few possible things that one might want to imply with that. – Bernhard Barker Nov 1 '20 at 21:55
Technically it could be stated as "in the worst case, the running time of the algorithm is $O(n^2)$ and higher". – Inertial Ignorance Nov 2 '20 at 1:37

Answer (Yuval Filmus):
The two phrases "The running time is $O(n^2)$" and "The running time is at most $O(n^2)$" mean the same thing. This is similar to the following two equivalent claims: "$x = y$ for some $y \leq z$" and "$x \leq y$ for some $y \leq z$".
Why would we ever use "at most $O(n^2)$", then? Sometimes we want to stress that the bound $O(n^2)$ is loose, and then it makes sense to use "at most $O(n^2)$". For example, suppose that we have a multi-part algorithm which we want to show runs in time $O(n^2)$. Suppose that we can bound the running time of the first step by $O(n)$. We could say "the first part runs in $O(n)$, which is at most $O(n^2)$".

"The first part runs in $O(n)$, which is at most $O(n^2)$" – I wouldn't use "at most" there, but rather just "also", or simply say it's $O(n^2)$. "At most" used in that way would to me imply that you can't say it's $O(n^3)$, but of course you can say that. – Bernhard Barker Nov 1 '20 at 21:50
You wouldn't; some would. It's a matter of style. – Yuval Filmus Nov 1 '20 at 22:02
I don't agree with this at all. To use a trivial example, Quicksort: you will often hear its running time described as $O(n \log n)$, and that is indeed the average running time, but its worst case is $O(n^2)$, so it is not at all true that "the running time is $O(n^2)$" is the same as saying "the running time is at most $O(n^2)$". What a person means by "running time" is contextual. I'd say it usually means average running time, not worst-case running time. – Fraser Orr Nov 2 '20 at 5:28
As far as I'm aware, there are two different meanings in use for "running time" without further specification. One is "the metric we care about in the situation where we apply it", which you mention here. The other is "the bound holds in any (worst, best, average, etc.) case", which means it is the worst-case running time. I don't think "at most" helps much to disambiguate these two usages. I think it's better to be fully explicit and write "X-case running time" if you don't want to be ambiguous. – Discrete lizard♦ Nov 2 '20 at 13:51
For sure it would be better to be explicit, but I would say that if someone said the running time was at most $O(n^2)$, it is fairly unambiguous, if not idiomatic, that they mean the worst-case running time. – Fraser Orr Nov 3 '20 at 19:33

Answer (Theodore Norvell):
"At most" might mean "at worst", i.e. that the worst-case time complexity is $O(n^2)$. For example, one might say that "Quicksort is at most $O(n^2)$," meaning that no matter what infinite subset of the inputs you look at, the complexity on that subset is never more than $O(n^2)$.

This points out the other phrasing you will see, "The runtime is on average $O(n^2)$". – Cort Ammon Nov 2 '20 at 1:26

Answer (BlueRaja - Danny Pflughoeft):
My reading is that it's not necessarily a tight bound, i.e. we know the algorithm is $O(n^2)$, but we don't know if it's (for example) $O(n^{1.99})$.

I wouldn't assume this is the right interpretation without more context, but it's definitely possible. – usul Nov 2 '20 at 2:47

Answer:
"$f(n)$ is in $O(n^2)$" means $f(n) \leq cn^2$ for all large $n$ and for some $c > 0$. Clearly if $f(n) \leq cn^2$, then $f(n) \leq cn^3$, $cn^4$, etc. So factually, "$f(n)$ is in $O(n^4)$" is equally true. It just gives you much less information, so it may be less useful. If someone says "$f(n)$ is at most $O(n^2)$", I would interpret that as "I proved it is in $O(n^2)$, but I couldn't be bothered to check whether it is possibly in a more narrow class". For example, if your algorithm does Step 1, which takes $O(n^3)$, and then Step 2, and you can prove that Step 2 is in $O(n^2)$, that's good enough for all purposes, and you wouldn't bother checking if it's maybe in $O(n^2 / \log n)$ or in $O(n^{1.5})$.
There's the class $\Theta(n^2)$, which means $c_1 n^2 \leq f(n) \leq c_2 n^2$ for all large $n$ and for some $0 < c_1 < c_2$. Here you can't just substitute $n^4$ for $n^2$. And there is "asymptotic $O(n^2)$", which means $f(n)$ is in $O(n^2)$ and not in $o(n^2)$; that is, $f(n) \leq c_2 n^2$ for all large $n$ and $f(n) \geq c_1 n^2$ for infinitely many $n$, for some $0 < c_1 < c_2$. Again, here you can't just substitute $n^4$.

+1. I would also interpret it as "I've proven it to be $O(n^2)$ but I haven't checked if it could be e.g. $O(n \log n)$". – Fax Nov 2 '20 at 14:27

Answer (ttnick):
You can just view $\mathcal{O}(n^2)$ as an anonymous function drawn from the underlying class. The statement means: the running time of the algorithm is at most quadratic in the input length $n$. I do not think there is anything controversial or wrong here.

Answer (zkutch):
My understanding is that by saying "$f$ is at most $O(n^2)$", the speaker perhaps wants to emphasize that the upper bound $O(n^2)$ is the least one they have established. Good point, anyway.

Answer (Loren Pechtel):
First, saying an algorithm is $O(n^2)$ doesn't mean its running time actually grows like $n^3$ or higher. And sometimes "at most" is quite relevant. Consider, for example, Quicksort. In the real world it normally runs in very close to $O(n \log n)$ time, but for any given implementation you can devise an evil data set that makes it run in $O(n^2)$ time. Certain naive implementations have a big problem with this, as the evil data is simply already-sorted data: add a few items, re-sort, and it goes slow. I am sure there are other algorithms that are like this, but none come to mind right now.
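Loren Pechtel's Quicksort point can be made concrete with a small sketch (an illustration added here, not from any answer above; the deliberately naive first-element-pivot quicksort and its function name are my own). On already-sorted input of size $n$ it performs exactly $n(n-1)/2$ comparisons, while a shuffled input stays far below that:

```python
import random

def quicksort_comparisons(a):
    """Return the number of comparisons made by a quicksort that always
    picks the first element as pivot (counting one comparison per element
    partitioned in each call)."""
    if len(a) <= 1:
        return 0
    pivot, rest = a[0], a[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return len(rest) + quicksort_comparisons(left) + quicksort_comparisons(right)

n = 100
sorted_input = list(range(n))
comps_sorted = quicksort_comparisons(sorted_input)   # n(n-1)/2 = 4950

random.seed(0)
shuffled = sorted_input[:]
random.shuffle(shuffled)
comps_shuffled = quicksort_comparisons(shuffled)     # roughly 2 n ln n for random input

print(comps_sorted, comps_shuffled)
```

The already-sorted case is exactly the "evil data set" in that answer: every partition is maximally unbalanced, so the worst-case bound $O(n^2)$ is actually attained.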
Forces & Newton's laws

Image: Godfrey Kneller's 1689 portrait of Newton, Wikipedia Commons

Sir Isaac Newton (1642-1727), mathematician and physicist, studied motion and the mathematics required to describe it, developing (concurrently with Gottfried Leibniz) the field of calculus along the way. In this section, we'll explore Newton's laws of motion, three axioms that Newton proposed to characterize all motion and its changes due to applied forces. Central to Newton's ideas is the concept of force, so we'll start there.

The four fundamental forces of nature

We know of four fundamental forces in the universe. They are

Gravity
Electrostatic (electromagnetic) force
Strong nuclear force
Weak nuclear force

Gravity and the electrostatic force are described in their own sections. The two nuclear forces are involved in holding the positive and neutral charges together in the nuclei of atoms, despite the strong electrostatic repulsion between protons (like charges repel).

Force at a distance vs. contact force

These forces are all non-contact forces; we say that gravity, for example, exerts a "force at a distance." We are also interested in mechanical forces, or contact forces: forces where the touching of two objects is necessary for the force to be transmitted. When you push someone on a swing, the push (a contact force) gets him/her going; gravity (force at a distance) keeps him/her swinging; and friction (air resistance, a contact force) slows him/her down.

Force is a vector quantity, so forces add like all vectors do. We call the force resulting from the addition of two or more force vectors the net force. The net force on an object may be zero, even though there may be many forces acting on it.

Contact force → touching
Force at a distance → invisible force

Newton's first law (inertia)

There are a number of ways to state the first law.
One of the most often encountered is "Objects in motion tend to remain in motion and objects at rest tend to stay at rest unless acted upon by an outside force." The first law is called the law of inertia. The idea is pretty clear, but I think we can say it a little more precisely and concisely like this:

Newton's first law: Inertia
Unless acted upon by a net force, an object will maintain its current velocity.

The velocity vector – Review

We ought to review a couple of concepts before we go on. First, recall that velocity is a vector. The only two important things about any vector are its length (the length of a velocity vector is the number we call speed) and its direction. In the case of a velocity vector, the direction is the direction of motion. Remember that the length or magnitude of a velocity vector, which is speed, can be zero (no speed → not moving), so we cover objects at rest. Constant velocity means travel in a straight line; and, as we learned in another section, acceleration is any change in the velocity vector, which includes its direction.

Net force

Forces are represented graphically and mathematically by vectors. Recall that many forces may come to bear on an object, but unless those forces are "unbalanced" in some way, they will add to zero — no net force. The diagram below shows four equivalent and balanced forces acting on an object. If we add the four force vectors in head-to-tail fashion (as we always add vectors), we see that there is no resultant vector (or, more properly, its length is zero), thus we have no net force in this situation. That makes sense. If two friends are each pushing with equal force on each of your shoulders, you might get squished, but you're not going anywhere. In the next picture, the forces act from the same directions, but they are unbalanced: $F_2$ and $F_3$ are weaker than $F_1$ and $F_4$.
If we add those forces, we have an imbalance, leading to a net force vector (red). In the first case, the green box would remain stationary; in the second it would move in the direction of the net force vector (down and to the right), and the length of that net force vector represents the magnitude (strength) of the force. Vectors are awesome.

Now we're in a good position to look at Newton's first law again. It takes a force to make anything different happen. An object that's sitting there will sit there forever unless the forces acting on it become unbalanced — unless a net force is generated. And an object that's moving at a constant velocity will not move faster or slower, nor will it change its direction unless acted upon by a net force.

Newton's second law

Newton's second law is perhaps the most important physical law in mechanics. It says that the acceleration of an object is proportional to the net force applied to it and inversely proportional to the mass (reflecting inertia).

Force = mass × acceleration

$$F = ma$$

The second law says that:

A mass can't be accelerating if there is no net force acting on it.
If a mass is accelerating, there must be a net force working on it.
Acceleration is inversely proportional to mass; for a given force, the acceleration decreases as the mass increases.
Acceleration is directly proportional to net force; the greater the force, the greater the acceleration.

Newton's third law

Newton's third law will also likely be familiar. You've probably heard "for every action there is an equal and opposite reaction." Again, here's a more precise way of saying it: Forces come in pairs.
If an object A exerts a force ($F_A$) on object B, then object B exerts an equal but opposite force ($F_B$) on object A: $F_A = -F_B$

Newton's third law will lead us directly to the law of conservation of momentum later, and it will help us to tie the forces of the universe together to give us a coherent picture of why familiar objects act and interact in the ways that they do.

Object at rest: Normal force

Consider the apple on the table. The object is not moving, so the magnitude (length) of its velocity vector is zero. Not moving means not accelerating, so according to the second law, $F = ma = m·0 = 0,$ the net force must be zero. But we know that the gravitational force $(F_g)$ is pulling the apple downward with force $F = -mg$ (I have chosen downward to be the negative direction, which is typical. As long as we're consistent, it really doesn't matter). Because the net force must be zero, there must be another force that exactly opposes the gravitational force. We call it the normal force ($F_N$), and it is a result of the electrostatic repulsion between the electrons that make up the atoms of the apple and the table. $$F_{net} = F_g + F_N = 0 \: \text{ because } \: F_g = -F_N$$ The normal force is the reason gravity doesn't pull the apple right through the table. After all, atoms are mostly empty space. J. Cruzan, 2012

Object with horizontal acceleration

Now consider an object sliding across a surface like this one. The box has mass, so there is a downward force, $-F_g$, on it. It is sliding straight across a horizontal surface, so the normal force and $F_g$ must be in balance: $F_g = -F_N$. The object is accelerating to the right, so the left-right forces, $F_L$ and $F_R$, must be unbalanced, with $F_L > F_R$. Newton's second law tells us that $F = ma$, and we remember that the force here is always the net force.
So the acceleration of the crate is just $F_{NET}/m$, where $F_{NET}$ is the vector sum of the left and right force vectors ($F_{NET} = F_L + F_R$, where the signs of $F_L$ and $F_R$ are opposite because they work in opposite directions. In the figure, I've just subtracted the two. Either way works as long as you keep track of the directions of forces). There is no acceleration in the up-down direction because those forces are balanced. By the way, in this situation, where the left force might be a pushing force, the rightward force might be friction. The friction force always opposes the motion. A forthcoming friction section will help make this clear.

Example 2a: Acceleration of a moving box

Calculate the acceleration of a 20 Kg box moving across the floor if a 10 N force is exerted from the left, and that force is opposed by a 1 N friction force from the right (friction is a force that always opposes the motion).

Solution: First let's sketch a free-body diagram of the problem. This will help to sort out the forces.

Vertical forces

The force of gravity on the box is $$ \begin{align} F_g &= mg \\[5pt] &= (20 \, Kg)\left(-9.81 \, \frac{m}{s^2}\right) \\[5pt] &= -196.2 \, N \end{align}$$ where we have set downward as the negative direction. The upward force exerted by the floor on the box (the normal force) must exactly balance the force of gravity, so it will be $F_N = 196.2 \, N$.

Horizontal forces

The net horizontal force is just the vector sum of the forces acting horizontally. These include a +10 N force from the left (I'm arbitrarily assigning forces pointing to the right to be positive here, but as long as you're consistent, it really doesn't matter which direction you choose to be positive).
That makes the 1 N friction force a negative force, so our total horizontal force is $$F_H = 10 \, N - 1 \, N = 9 \, N.$$ Now it's easy to calculate the acceleration from Newton's 2nd law: $$ \begin{align} F_{net} &= ma \; \color{#E90F89}{\longrightarrow} a = \frac{F_{net}}{m} \\[5pt] a &= \frac{9 \, N}{20 \, Kg} \\[5pt] a &= 0.45 \frac{m}{s^2} \end{align}$$ Now you might wonder why we even calculated the up/down net force. After all, the net force in that direction is zero (the box isn't moving in the up/down direction), and clearly it doesn't contribute to the left/right acceleration. Well, that's not entirely true, as we will see later when we discuss friction. Here we were given the friction force, $F_f = 1 \, N,$ but in general, the friction force will depend on the normal force. You might imagine that if we piled more weight on top of our box, it would increase the friction between the box and the floor.

Object in motion at constant velocity

Now this is a tricky one, but if you just go back to Newton's laws, you'll get it. Consider an object in motion at constant velocity, like the firefighter sliding down the pole. If the velocity vector of the firefighter doesn't change, then because $a = F_{NET}/m$ (and the firefighter certainly has nonzero mass), $F_{NET}$ must be zero. That means that any downward force must be perfectly balanced by some upward force. In this case, that upward force is the friction force. The firefighter can increase or decrease his friction with the pole by tightening or loosening his hold.

A friction force less than the gravitational force will cause downward acceleration.
A friction force exactly equal to the downward gravitational force leads to constant-velocity motion.
A friction force greater than the gravitational force will cause the firefighter to slow and eventually stop because there is a net upward force.
Image: http://lincoln.ne.gov/city/fire

The ramp – a "free body diagram"

Consider a ball rolling down a ramp like the one below. We almost always change our coordinate system from one in which the x-axis is parallel to level ground to one in which it parallels the ramp. It's much more convenient that way, and nature doesn't really care about our coordinate system anyway. We can then resolve the gravitational force vector, $F_g$, into components parallel to our new axes.

We can do better at listing the forces in such a diagram. The force pointing into the ramp, $F_y$, is countered by the normal force, the force of the ramp pushing back on the ball. Likewise, the force pushing the ball down the ramp is countered by some frictional force, $F_f$. The whole free-body diagram, showing all forces in play, looks like this: You'll have to think about parallel lines and congruent angles to see that the angle between $F_y$ and $F_g$ is just the ramp angle, $\theta$. The trigonometry that gives us our component vectors relative to $F_g$ is

$$ \begin{align} \sin(\theta) &= \frac{F_x}{F_g} \\[5pt] \text{so } \: \: F_x &= F_g \, \sin(\theta) \end{align}$$

$$ \begin{align} \cos(\theta) &= \frac{F_y}{F_g} \\[5pt] \text{so } \: \: F_y &= F_g \, \cos(\theta) \end{align}$$

The two ramps below illustrate why the acceleration of a rolling ball is greater down a steeper ramp: the component of the gravitational force, the only force present that makes the ball roll, is greater down the ramp when the ramp is steeper.

Here are some examples of how to use Newton's laws in a variety of situations.

A wagon is pulled by applying a force at an angle to the direction of motion. Calculate the resulting acceleration of the wagon. Minutes of your life: 2:44

This example is similar to the previous one, except that we put a load in the wagon and friction is a factor.
I found an error in this example. I'll have it reposted soon. Undergoing editing

A crate is pushed up a ramp with a force parallel to the ground. Friction is a factor. We calculate the net force up the ramp and then the resulting acceleration of the crate.

This is a convoluted example that requires us to use many notions of mechanics, including Newton's laws and momentum. Very worthwhile to understand. Working on it ... stay tuned.

An axiom is a statement that is regarded as being true or established, or self-evidently true. Axioms cannot be proven. A fundamental axiom is the transitive property: things that are equal to the same thing are equal to each other, or if a = b and c = b, then a = c.

Magnitude means size. The magnitude of a vector is its length. We can also refer to orders of magnitude of numbers. In that case the magnitude is the number of zeros after or before the 1's digit.
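The free-body bookkeeping on this page can be replayed numerically. The sketch below reproduces Example 2a with the same numbers and then decomposes gravity on a ramp; the 30° ramp angle and the variable names are my own choices for illustration:

```python
import math

g = 9.81  # m/s^2

# Example 2a: 20 kg box, 10 N push from the left, 1 N friction opposing it
m = 20.0                          # kg
F_push, F_friction = 10.0, -1.0   # N, rightward taken as positive
F_gravity = -m * g                # N, downward taken as negative
F_normal = -F_gravity             # floor balances gravity: vertical net force is zero

F_net = F_push + F_friction       # net horizontal force, 9 N
a = F_net / m                     # Newton's 2nd law: a = F_net / m = 0.45 m/s^2
print(F_net, a)

# Ramp decomposition: components of gravity along and into a ramp of angle theta
theta = math.radians(30.0)
F_g = m * g
F_x = F_g * math.sin(theta)   # component along the ramp (drives the motion)
F_y = F_g * math.cos(theta)   # component into the ramp (balanced by the normal force)

# sanity check: the two components recombine to the full gravitational force
assert math.isclose(math.hypot(F_x, F_y), F_g)
```

Steepening the ramp (increasing `theta`) increases $F_x$ and decreases $F_y$, which is exactly the point made by the two-ramp comparison above.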
Kepler Lounge

Kolmogorov's theory of Algorithmic Probability

An introduction to Kolmogorov's theory of Algorithmic Probability, which clarifies the notion of entropy, and its application to the game of 20 questions.

Aidan Rocke https://github.com/AidanRocke

My greatest concern was what to call it. I thought of calling it 'information,' but the word was overly used, so I decided to call it 'uncertainty.' When I discussed it with John von Neumann, he had a better idea. Von Neumann told me, 'You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, no one really knows what entropy really is, so in a debate you will always have the advantage.' -Claude Shannon

Motivation: You meet a French mathematician at the Reykjavik airport with a million things on his mind, but at any moment he is only thinking of one thing. What is the minimum number of questions, asked in sequential order, that you would need to determine what he is thinking about? \[\begin{equation} \log_2 (10^6) \approx 20 \tag{1} \end{equation}\] So we might as well play a game of 20 questions. Moreover, the popularity of this game suggests that any human concept may be described using 20 bits of information. If we may solve this particular inductive problem, might it be possible to solve the general problem of scientific induction?
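The estimate in (1) can be checked directly (a small sketch of my own; the halving strategy plays the role of the yes/no questions): any one of $10^6$ candidates is pinned down in at most $\lceil \log_2 10^6 \rceil = 20$ questions.

```python
import math

def questions_needed(n_items, target):
    """Count the halving yes/no questions needed to identify `target` in range(n_items)."""
    lo, hi = 0, n_items  # invariant: target lies in [lo, hi)
    questions = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        questions += 1           # ask: "is it below mid?"
        if target < mid:
            hi = mid
        else:
            lo = mid
    return questions

n = 10**6
worst = max(questions_needed(n, t) for t in (0, n // 2, n - 1))
print(worst, math.ceil(math.log2(n)))  # both are 20
```

Each question at worst halves (rounding up) the candidate set, so no target can require more than $\lceil \log_2 n \rceil$ questions.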
Kolmogorov's theory of Algorithmic Probability: Using Kolmogorov's theory of Algorithmic Probability, we may apply Occam's razor to any problem of scientific induction, including the sequential game of 20 questions. However, it is easy to forget that this required overcoming a seemingly insurmountable scientific obstacle, one which dates back to Von Neumann's remark in the epigraph above. This was accomplished through an ingenious combination of Shannon's theory of Communication with Alan Turing's theory of Computation. What emerged from this process is the most powerful generalisation of Shannon's theory: Kolmogorov Complexity and Shannon Entropy share the same units, and Kolmogorov Complexity elucidates the Shannon Entropy of a random variable as its Expected Description Length. Furthermore, by assuming that the Physical Church-Turing thesis is true, Kolmogorov's theory of Algorithmic Probability formalizes Occam's razor as it is applied in the natural sciences. In the case of our game, we may formulate Occam's razor using the three fundamental theorems of Algorithmic Probability before approximating Kolmogorov Complexity, which is limit-computable, using Huffman Coding in order to solve the game of 20 questions. The implicit assumption here is that you, the second player, are able to encode the knowledge of the first player because you share a similar education and culture.

The fundamental theorems of Algorithmic Probability: All proofs are in the Appendix.

The first fundamental theorem of Algorithmic Probability: Kolmogorov's Invariance theorem clarifies that the Kolmogorov Complexity, \(K_U(x)\), of a dataset is invariant to the choice of Turing-Complete language used to simulate a Universal Turing Machine: \[\begin{equation} \forall x \in \{0,1\}^*, \lvert K_U(x)-K_{U'}(x) \rvert \leq \mathcal{O}(1) \tag{2} \end{equation}\]
The minimal description \(p\) such that \(U \circ p = x\) serves as a natural representation of the string \(x\) relative to the Turing-Complete language \(U\). Moreover, as \(x\) can't be compressed further \(p\) is an incompressible and hence uncomputable string. This corresponds to a physicists' intuition for randomness and clarifies the reason why Kolmogorov Complexity is not computable. It follows that any piece of data has a necessary and sufficient representation in terms of a random string. The second fundamental theorem of Algorithmic Probability: Levin's Universal Distribution effectively formalizes Occam's razor which solves the age-old problem of scientific induction. Given that any uniquely-decodable code satisfies the Kraft-McMillan inequality, prefix-free Kolmogorov Complexity allows us to derive the Universal Distribution: \[\begin{equation} P(x) = \sum_{U \circ p = x} 2^{-K_U(p)} \leq 1 \tag{3} \end{equation}\] where the fact that \(U\) may simulate a prefix-free UTM implies that for two distinct descriptions \(p\) and \(p'\), \(p\) isn't a substring of \(p'\) and \(p'\) isn't a substring of \(p\). In a Computable Universe, given a phenomenon with encoding \(x \in \{0,1\}^*\) generated by a physical process the probability of that phenomenon is well-defined and equal to the sum over the probabilities of distinct and independent causes. The prefix-free criterion is precisely what guarantees causal independence. Furthermore, Levin's Universal Distribution formalizes Occam's razor as the most likely cause for that process is provided by the minimal description of \(x\) and more lengthy explanations are less probable causes. 
The third fundamental theorem of Algorithmic Probability: Using Levin's Coding theorem, the Algorithmic Probability of a data stream \(x \in \{0,1\}^*\) is defined in terms of its Kolmogorov Complexity: \[\begin{equation} -\log_2 P(x) = K_U(x) + \mathcal{O}(1) \tag{4} \end{equation}\] and due to Kolmogorov's Invariance theorem this probability is independent of our choice of reference Universal Turing Machine, or Turing-Complete language, used to define \(x \in \{0,1\}^*\). In a Computable Universe, all probabilities are of a deterministic and frequentist nature, and so all stochastic systems have ergodic dynamics. In fact, Levin's Coding theorem is the most concise summary of Occam's razor, as it tells us that the typical cause of a process \(x\) is provided by the minimal description of \(x\). Hence, the most probable scientific hypothesis ought to be as simple as possible but no simpler. Although Algorithmic Probability is not computable, because Kolmogorov Complexity is not computable, Levin's Coding theorem provides us with a framework for machine learning as data compression, as we may predict the future behaviour of a physical system by compressing its historical data.

Maximum Entropy via Occam's razor: Given a discrete random variable \(X\) with computable probability distribution \(f\), it may be shown that: \[\begin{equation} \mathbb{E}[K_U(X)] = \sum_{x \sim f(X)} f(x) \cdot K_U(x) = H(X) + \mathcal{O}(1) \tag{5} \end{equation}\] where \(H(X)\) is the Shannon Entropy of \(X\) in base-2. In other words, the Shannon Entropy of a random variable is precisely its Expected Description Length. Hence, machine learning systems that minimise the KL-Divergence are implicitly applying Occam's razor.

The game of 20 questions: The game of 20 questions is played between Alice and Bob, who are both assumed to be trustworthy and rational.
Thus, Alice and Bob both perform sampling and inference using Levin's Universal Distribution \(P\) over a shared set of symbols \(\mathcal{A}=\{a_i\}_{i=1}^n\). For the sake of convenience, we shall assume that \(\mathcal{A}\) represents entries in the Encyclopædia Britannica and \(\log_2 n \approx 20\). Bob selects an object \(a \in \mathcal{A}\) and Alice determines the object by asking Yes/No queries, encoded using a prefix-code, sampled from \(P(\mathcal{A})\). Alice's goal is to minimize the expected number of queries, which is equivalent to determining: \[\begin{equation} X \sim P(\mathcal{A}), \mathbb{E}[K_U(X)] = H(X) + \mathcal{O}(1) \tag{*} \end{equation}\] In this setting, Shannon Entropy may be understood as a measure of hidden information, and we shall show that (*) has a solution using Huffman Coding.

The game of 20 questions, or Huffman Coding as an approximation of the Universal Distribution: The game of 20 questions is a special case of Huffman Coding. Given a set \(\mathcal{A}=\{a_i\}_{i=1}^n\) of symbols and a discrete probability distribution \(P=\{p_i\}_{i=1}^n\) which describes the typical frequency of each symbol, a prefix-free code \(C(P)=\{c_i\}_{i=1}^n\) assigns to each symbol \(a_i\) a codeword \(c_i \in \{0,1\}^*\). Let \(\mathcal{L}(C(P))= \sum_{i=1}^n p_i \cdot \lvert c_i \rvert\) be the weighted length of code \(C\). We want to find \(C\) such that \(\mathcal{L}(C(P))\) is minimized. If the Universal Description Language \(U\) is prefix-free, we may note that: \[\begin{equation} X \sim P(\mathcal{A}), H(X) \sim \mathbb{E}[K_U(X)] = \sum_{i=1}^n p_i \cdot K_U(c_i) \leq \sum_{i=1}^n p_i \cdot \lvert c_i \rvert \tag{6} \end{equation}\] \[\begin{equation} X \sim P(\mathcal{A}), H(X) \leq \sum_{i=1}^n p_i \cdot \lvert c_i \rvert \tag{7} \end{equation}\] Hence, Huffman Coding is an entropy coding method that provides us with a robust approximation to the Expected Kolmogorov Complexity.
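Inequalities (6) and (7) can be checked numerically for any concrete prefix-free code. A minimal sketch, using an assumed four-symbol alphabet with the dyadic distribution (1/2, 1/4, 1/8, 1/8), for which the bound happens to be tight:

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits."""
    return -sum(p * log2(p) for p in probs if p > 0)

probs = [0.5, 0.25, 0.125, 0.125]
code = ["0", "10", "110", "111"]  # a prefix-free code for the 4 symbols

expected_length = sum(p * len(c) for p, c in zip(probs, code))
assert entropy(probs) <= expected_length               # inequality (7)
assert abs(entropy(probs) - expected_length) < 1e-12   # tight here: both 1.75 bits
```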
Huffman Coding: A description of the Huffman Coding algorithm: The technique works by creating a binary tree of nodes where leaf nodes represent the actual symbols in the input data. A node may be either a leaf node or an internal node. Initially, all nodes are leaf nodes, each representing a symbol and its typical frequency. Internal nodes represent links to two child nodes and the sum of their frequencies. As a convention, bit '0' represents following the left child and bit '1' represents following the right child.

The simplest Huffman Coding algorithm: The simplest construction uses a priority queue where the node with the lowest probability is given the highest priority:

1. Create a leaf node for each symbol and add it to the priority queue.
2. While there is more than one node in the queue:
   a. Remove the two nodes of highest priority (i.e. lowest probability) from the queue.
   b. Create a new internal node with these two nodes as children and with probability equal to the sum of the two nodes' probabilities.
   c. Add the new node to the queue.
3. The remaining node is the root node and the tree is complete.

The reader may verify that the time complexity of this algorithm is \(\mathcal{O}(n \log n)\) using a binary heap, or \(\mathcal{O}(n)\) if the symbols arrive already sorted by frequency.

Does the solution found via Huffman Coding agree with our intuitions? Assuming that internal nodes are given labels \(v \in [1, 2 \cdot \lvert \mathcal{A} \rvert]\) while leaf nodes are given labels \(c_i \in \{0,1\}^*\), the information gained from any sequence of queries may be determined from the entropy formula: \[\begin{equation} S \subset [1, 2 \cdot \lvert \mathcal{A} \rvert], H(S) = \sum_{i \in S} - f_i \cdot \log_2 f_i \tag{8} \end{equation}\] where the order of the internal nodes may be determined by sorting the vertices \(i \in S\) with respect to their un-normalised entropies \(\{-\log_2 f_i \}_{i \in S}\). In principle, children of a parent node represent refinements of a similar concept, so tree depth represents depth of understanding.
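The priority-queue construction above is short enough to state in full. A minimal sketch using Python's stdlib heapq module (the insertion counter is only a tie-breaker so that equal-probability nodes compare deterministically; it is an implementation detail, not part of the algorithm):

```python
import heapq

def huffman_code(freqs):
    """freqs: dict mapping symbol -> probability. Returns dict symbol -> codeword."""
    if len(freqs) == 1:  # degenerate single-symbol alphabet
        return {next(iter(freqs)): "0"}
    # Heap entries: (probability, tie-breaker, {symbol: partial codeword})
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p0, _, left = heapq.heappop(heap)   # the two lowest-probability nodes
        p1, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}         # left child gets '0'
        merged.update({s: "1" + c for s, c in right.items()})  # right child gets '1'
        heapq.heappush(heap, (p0 + p1, count, merged))
        count += 1
    return heap[0][2]

code = huffman_code({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125})
assert {s: len(c) for s, c in code.items()} == {"a": 1, "b": 2, "c": 3, "d": 3}
```

For this dyadic distribution the weighted code length is 0.5·1 + 0.25·2 + 0.125·3 + 0.125·3 = 1.75 bits, equal to the entropy, so the bound in (7) is met with equality.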
This degree of understanding may be measured in terms of entropy \(- f_i \cdot \log_2 f_i\). Hence, we have a satisfactory solution to the Game of 20 questions. Zooming out, we may consider the ultimate impact of Kolmogorov's formalisation of scientific induction, which Kolmogorov foretold [2]:

"Using his brain, as given by the Lord, a mathematician may not be interested in the combinatorial basis of his work. But the artificial intellect of machines must be created by man, and man has to plunge into the indispensable combinatorial mathematics." - Kolmogorov

In fact, Kolmogorov's theory of Algorithmic Probability may be viewed as a complete theory of machine epistemology. As for what may potentially limit the scope of machine epistemology relative to human epistemology, I recommend considering the big questions section of [8].

Proofs:

Proof of Kolmogorov's Invariance theorem: The following is taken from [4]. From the theory of compilers, it is known that for any two Turing-Complete languages \(U_1\) and \(U_2\), there exists a compiler \(\Lambda_1\), expressed in \(U_1\), that translates programs expressed in \(U_2\) into functionally-equivalent programs expressed in \(U_1\). It follows that if we let \(p\) be the shortest program that prints a given string \(x\), then: \[\begin{equation} K_{U_1}(x) \leq |\Lambda_1| + |p| \leq K_{U_2}(x) + \mathcal{O}(1) \tag{9} \end{equation}\] where \(|\Lambda_1| = \mathcal{O}(1)\), and by symmetry we obtain the opposite inequality.

Proof of Levin's Universal Distribution: This is an immediate consequence of the Kraft-McMillan inequality. Kraft's inequality states that given a sequence of codeword lengths \(\{k_i\}_{i=1}^n\), there exists a prefix code with codewords \(\{\sigma_i\}_{i=1}^n\) where \(\forall i, |\sigma_i|=k_i\) if and only if: \[\begin{equation} \sum_{i=1}^n s^{-k_i} \leq 1 \tag{10} \end{equation}\] where \(s\) is the size of the alphabet \(S\).
Without loss of generality, let's suppose we may order the \(k_i\) such that: \[\begin{equation} k_1 \leq k_2 \leq ... \leq k_n \tag{11} \end{equation}\] Now, there exists a prefix code if and only if at each step \(j\) there is at least one codeword to choose that does not contain any of the previous \(j-1\) codewords as a prefix. Due to the existence of a codeword chosen at a previous step \(i < j\), \(s^{k_j - k_i}\) of the \(s^{k_j}\) candidate codewords of length \(k_j\) are forbidden, as they contain \(\sigma_i\) as a prefix. It follows that in general a prefix code exists if and only if: \[\begin{equation} \forall j \geq 2, s^{k_j} > \sum_{i=1}^{j-1} s^{k_j - k_i} \tag{12} \end{equation}\] Dividing both sides by \(s^{k_j}\), we find: \[\begin{equation} 1 > \sum_{i=1}^{j-1} s^{-k_i} \tag{13} \end{equation}\] which is precisely Kraft's inequality.

Proof of Levin's Coding theorem: We may begin by observing that \(P(x)\) defines the 'a priori' frequency with which the prefix-free UTM \(U\) outputs \(x\) when the input to \(U\) is a program of length \(K_U(p)\) generated by an Oracle capable of Universal Levin Search. From the vantage point of \(U\), the description of this program is incompressible, or algorithmically random, so it may as well have been generated by fair coin flips, where the number of flips satisfies: \[\begin{equation} K_U(x) \leq K_U(p) \leq K_U(x) + \mathcal{O}(1) \tag{14} \end{equation}\] and the \(\mathcal{O}(1)\) term comes from Kolmogorov's Invariance theorem.
Hence, we may observe the upper-bound: \[\begin{equation} -\log_2 P(x) \leq K_U(x) + \mathcal{O}(1) \tag{15} \end{equation}\] Likewise, we have the lower-bound: \[\begin{equation} -\log_2 P(x) \geq K_U(x) \tag{16} \end{equation}\] Thus, we may deduce that: \[\begin{equation} -\log_2 P(x) = K_U(x) + \mathcal{O}(1) \tag{17} \end{equation}\] Conversely, if we consider the natural correspondence between entropy (or information content) and algorithmic probability: \[\begin{equation} P(x) = \sum_{U \circ p = x} P(U \circ p = x) = \sum_{U \circ p =x} 2^{-K_U(p)} \tag{18} \end{equation}\] we may derive this result using the Shannon source coding theorem by considering the entropy of the source distribution \(P(x)\). While the entropy, or information content, of \(x\) is given by the shortest program of length \(K_U(x)\) that contains the necessary and sufficient axioms for generating \(x\), any program \(p\) satisfying \(U \circ p = x\) where: \[\begin{equation} |p| - K_U(x) > 0 \tag{19} \end{equation}\] must have a finite number of additional axioms or hypotheses that are independent of \(U\) and \(x\). As these additional axioms may be necessary with respect to a different Turing-Complete language \(U'\), we may bound the information content of \(p\) by applying Kolmogorov's Invariance theorem: \[\begin{equation} |p| - K_U(x) \leq \mathcal{O}(1) \tag{20} \end{equation}\] Hence, if we run this coin-flipping experiment infinitely many times by sampling each program \(p\) in an i.i.d.
manner from Levin's Universal Distribution: \[\begin{equation} \forall X \sim P(x), -\log_2 P(x) = \lim_{N \to \infty} \frac{-\log_2 P(X_1,...,X_N)}{N} = \lim_{N \to \infty} \frac{\sum_{i=1}^N -\log_2 P(X_i)}{N} = K_U(x) + \mathcal{O}(1) \tag{21} \end{equation}\]

Proof of Expected Kolmogorov Complexity equals Shannon Entropy: Let's suppose we have a computable probability distribution: \[\begin{equation} x \sim f(X), \sum_x f(X=x) = 1 \tag{22} \end{equation}\] In a computable Universe, where all physical laws may be simulated by a UTM, \(f(X=x)\) is equivalent to the a priori frequency with which we observe the event \(X=x\). Hence, if \(P(x)\) is the Algorithmic Probability of \(x\) defined via Levin's Coding theorem: \[\begin{equation} -\log_2 P(x) = K_U(x) + \mathcal{O}(1) \tag{23} \end{equation}\] then the combination of Kolmogorov's Invariance theorem and the Shannon Source coding theorem allows us to determine that \(-\log_2 f(x)\) is independent of the choice of UTM that simulates the Universe. In fact, by sampling over independent causes \(p\) in an i.i.d. manner from Levin's Universal Distribution, we may deduce that the a priori frequency must satisfy: \[\begin{equation} \forall X \sim P(x), -\log_2 f(x) = \lim_{N \to \infty} \frac{-\log_2 P(X_1,...,X_N)}{N} = K_U(x) + \mathcal{O}(1) \tag{24} \end{equation}\] Using the last two results, it follows that the Expected Kolmogorov Complexity of a random variable \(X\) is equivalent to its Shannon Entropy up to an additive constant: \[\begin{equation} \mathbb{E}[K_U(X)] = \sum_{x \sim f(X)} f(x) \cdot K_U(x) = -\sum_{x \sim f(X)} f(x) \cdot \log_2 f(x) + \mathcal{O}(1) = H(X) + \mathcal{O}(1) \tag{25} \end{equation}\]

Appendix B: Algorithmic Probability and the collapse of the Universal Wave Function

Assuming that the evolution of the Quantum State of the Universe may be simulated by the Schrödinger equation, Kolmogorov's theory of Algorithmic Probability provides us with an elegant mathematical description of what a particular physicist observes during a Quantum Measurement.
Interestingly, this description of non-computable measurements is in qualitative agreement with the Von Neumann-Wigner theory of Quantum Measurement.

Breaking the Von Neumann chain: Through the paradox of the Von Neumann chain, the Von Neumann-Wigner interpretation of Quantum Mechanics posits that a conscious observer must lie beyond Quantum Computations:

"If an observer is a purely physical object, a more comprehensive wave function may now be expressed which encompasses both the state of the Quantum system being observed and the state of the observer. The various possible measurements are now in a superposition of states, representing different observations. However, this leads to a problem: you would now need another measuring device to collapse this larger wave function, but this would develop into a superposition state. Another device would be needed to collapse this state, ad infinitum. This problem, the Von Neumann chain, is an infinite regression of measuring devices whose stopping point is presumed to be the conscious mind." - Aeowyn Kendall

Von Neumann's theory of Quantum measurement may thus be summarised as follows: (1) The Quantum state of a system generally evolves smoothly as dictated by the Schrödinger wave equation. (2) Otherwise, the Quantum State of this system collapses suddenly and sharply due to a conscious observer. If we consider that there is a Quantum State associated with the Universe and combine this with the Kantian view that the mind interprets and generates the world, then what a person observes may be defined by the Algorithmic Probability \(P(x|\hat{x})= 2^{-K_U(x\hat{x})}\), where \(\hat{x}\) denotes the Qualia or conscious state of a person and \(x\) denotes the observations of this person, or one of many mutually exclusive Quantum branches of the Universal Wave Function.
As Kolmogorov Complexity is not computable, what a particular physicist observes during a Quantum experiment may not be determined by a computable function such as the Schrödinger Wave equation. This has a number of important consequences:

1. Computability defines the epistemological constraint on a Quantum Physicist, who can only determine the average frequencies of experimental outcomes. Hence, the process of conscious observation must lie outside the realm of computable functions, including the Schrödinger wave equation.
2. On a planet of eight billion people there are eight billion Universes, each evolving with their own Universal Wave Function.
3. A hypothetical Quantum Computer would need an Oracle in the loop.

References:

A. N. Kolmogorov. Three approaches to the quantitative definition of information. Problems of Information and Transmission, 1(1):1-7, 1965.
A. N. Kolmogorov. Combinatorial foundations of information theory and the calculus of probabilities. Russian Math. Surveys, 1983.
Peter Grünwald and Paul Vitányi. Shannon Information and Kolmogorov Complexity. Arxiv, 2004.
Peter Grünwald and Paul Vitányi. Kolmogorov Complexity and Information Theory: With an Interpretation in Terms of Questions and Answers. Journal of Logic, Language, and Information, 2003.
Peter Grünwald and Paul Vitányi. Algorithmic Information Theory. Arxiv, 2008.
L. A. Levin. Laws of information conservation (non-growth) and aspects of the foundation of probability theory. Problems Inform. Transmission, 10:206-210, 1974.
Marcus Hutter et al. Algorithmic probability. Scholarpedia, 2(8):2572, 2007.
Walter Kirchherr, Ming Li and Paul Vitányi. The Miraculous Universal Distribution. The Mathematical Intelligencer, 1997.
Yuval Dagan, Yuval Filmus et al. Twenty (simple) questions. Arxiv, 2017.
Marcus Hutter. A theory of Universal Intelligence based on Algorithmic Complexity. 2000.
Shane Legg and Marcus Hutter. Universal Intelligence: A Definition of Machine Intelligence. 2007.
Aeowyn Kendall.
Quantum Mechanics and its Broader Implications: The Von Neumann-Wigner interpretation. 2019.
Hugh Everett. The theory of the Universal Wave Function. Princeton University Press, 1957.

For attribution, please cite this work as: Rocke (2022, Oct. 6). Kepler Lounge: Kolmogorov's theory of Algorithmic Probability. Retrieved from keplerlounge.com

@misc{rocke2022kolmogorov,
  author = {Rocke, Aidan},
  title = {Kepler Lounge: Kolmogorov's theory of Algorithmic Probability},
  url = {keplerlounge.com},
  year = {2022}
}
Effect of household water treatment with chlorine on diarrhea among children under the age of five years in rural areas of Dire Dawa, eastern Ethiopia: a cluster randomized controlled trial

Ephrem Tefera Solomon ORCID: orcid.org/0000-0003-2035-436X1,2, Sirak Robele1, Helmut Kloos3 & Bezatu Mengistie2

Infectious Diseases of Poverty volume 9, Article number: 64 (2020)

Diarrheal disease is a leading cause of child mortality and morbidity worldwide. Household water treatment with chlorine significantly reduces morbidity due to waterborne diseases. However, the effect of point-of-use (POU) water treatment in improving the quality of water in areas where POU is not provided free of charge, and the effectiveness of home visits in inspiring household members to use POU regularly, have not been studied. The objective of this study was to evaluate the effectiveness of drinking water disinfection by chlorination on diarrheal disease reduction among children under the age of 5 years in rural eastern Ethiopia. A cluster randomized controlled trial was carried out in rural Dire Dawa from October 2018 through January 2019. A total of 405 households were randomized to intervention and control arms, and intervention materials were distributed after conducting a baseline survey. This trial evaluated the effectiveness of household drinking water disinfection by chlorination in reducing the incidence of diarrhea among children under the age of 5 years. Intervention households received 1.2% sodium hypochlorite with a demonstration of its proper use. Participants in the control households continued with their usual habits of water collection and water storage. Generalized estimating equations (GEE) with a Poisson distribution family, log link, and exchangeable correlation matrix were used to compute the crude incidence rate ratio (IRR), the adjusted IRR, and the corresponding 95% confidence intervals.
A total of 281 diarrhea cases were documented in the intervention households (8.7 cases per 100 person-weeks of observation) and 446 in the control households (13.8 cases per 100 person-weeks of observation). A 36.0% (adjusted IRR = 0.64, 95% CI: 0.57–0.73) reduction in the incidence of diarrhea was observed in the intervention arm compared with the control arm. The highest and lowest reductions were obtained in children aged 1 to 2 years and 3 to 4 years: 42.7 and 30.4%, respectively. Adherence to the intervention was 81.3%, as measured by the free residual chlorine test. In rural areas where diarrhea is the second leading cause of morbidity, water chlorination at the household level using liquid bleach considerably reduced episodes of diarrhea among children under the age of 5 years. Therefore, chlorinating drinking water at the household level may be a valuable interim solution for reducing the incidence of diarrheal diseases until potable water is made accessible to the majority of the population in Dire Dawa Administration and other Ethiopian communities. Trial registration: PACTR, PACTR201807815961394. Registered 16 July 2018, www.pactr.org. Diarrhea was responsible for an estimated 533 768 deaths among children younger than 5 years globally in 2017, a rate of 78.4 deaths per 100 000 children [1]. The problem is aggravated in children living in rural rather than urban areas [2]. A recent systematic review and meta-analysis examining the prevalence and determinants of diarrhea among under-five children in Ethiopia indicated that children from rural households were 1.9 times more likely to have diarrhea than their urban counterparts [3]. Unsafe drinking water is a major cause of diarrhea deaths and disease, especially for young children and vulnerable populations in low-income countries [4].
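The crude rate ratio implied by these counts can be reproduced directly. A sketch in which the person-week denominators are back-calculated from the reported rates; this yields only the crude, unadjusted estimate, which here happens to sit close to the GEE-adjusted IRR of 0.64:

```python
from math import exp, log, sqrt

def incidence_rate_ratio(cases_1, pt_1, cases_0, pt_0, z=1.96):
    """Crude IRR with a Wald-type 95% confidence interval on the log scale."""
    irr = (cases_1 / pt_1) / (cases_0 / pt_0)
    se = sqrt(1 / cases_1 + 1 / cases_0)  # SE of log(IRR) for Poisson counts
    return irr, exp(log(irr) - z * se), exp(log(irr) + z * se)

# Person-weeks back-calculated from "8.7 and 13.8 cases per 100 person-weeks"
pw_intervention = 281 / 0.087  # about 3230 person-weeks
pw_control = 446 / 0.138       # about 3232 person-weeks

irr, lo, hi = incidence_rate_ratio(281, pw_intervention, 446, pw_control)
assert round(irr, 2) == 0.63
assert round(lo, 2) == 0.54 and round(hi, 2) == 0.73
```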
Furthermore, the majority of the world's population does not have access to water piped into their homes and must carry, transport, and store water within their homes. In these situations, recontamination of drinking water is often significant and is increasingly recognized as an important public health issue [5]. Unhygienic handling of water during transport or within the home can contaminate previously safe water. WHO estimates that 94.0% of diarrhea cases are preventable through modifications to the environment, including increasing the availability of clean water and improving sanitation and hygiene [6]. Therefore, promoting household water treatment and safe storage (HWTS) helps vulnerable populations to take charge of their own water security by providing them with the knowledge and tools that enable them to treat their own drinking water [7]. Interventions to improve water quality are generally effective in preventing diarrhea, and effectiveness is usually positively associated with compliance [8]. According to a United Nations International Children's Emergency Fund report, point-of-use water treatment with chlorine solution has been estimated to reduce diarrheal disease by 29.0% [9]. However, excess chlorine can react with precursors in the water to form disinfection by-products (DBPs), such as trihalomethanes (THMs) and haloacetic acids (HAAs), which increase the risk of cancer [10]. Various intervention studies achieved reductions in incidence and longitudinal prevalence of diarrhea among children under 5 years of 11.0 to 90.0% [11,12,13,14,15,16,17,18,19,20]. Conversely, some interventions failed to reduce diarrhea levels [21,22,23,24]. According to the Dire Dawa Regional Health Bureau, in 2016 diarrhea was the second leading cause of morbidity, next to upper respiratory infections, in children under the age of 5 years, affecting 19 194 (30.8%) children (Dire Dawa Administration Regional Health Bureau 2015/2016 Budget Year Annual Report, unpublished).
Populations with microbiologically safe piped water tend to have the lowest mortality rates from diarrheal disease [25]. However, piped water supplies are still scarce in many communities in low-income countries. Thus, until these services become widely available in these countries, POU water treatment is a potential interim solution to the problems caused by diarrhea [25]. However, the effect of the POU treatment in improving water quality against post-source contamination, the magnitude of the intervention effect in areas where POU is not provided free of charge, and the effectiveness of home visits in inspiring household members to use POU regularly have not been determined. Determining these relationships may aid the effort to upscale point-of-use to a larger community level. Hence, the objective of this study was to evaluate the effectiveness of drinking water disinfection by chlorination at the household level in diarrheal disease reduction among children under the age of 5 years in rural parts of Dire Dawa, eastern Ethiopia. Dire Dawa, one of the two federal cities, is a commercial and industrial center located 505 km east of Addis Ababa on the Addis Ababa–Djibouti railroad. Dire Dawa Administration consists of 9 urban and 38 rural kebeles (the smallest administrative units). According to the Dire Dawa Water, Mine and Energy Bureau, safe drinking water in rural areas is supplied by protected springs, protected shallow wells, and deep wells. Safe drinking water reached 71.8% of the area in 2017. Thirty-three health posts and seven health centers render health services to the rural population (Dire Dawa Administration Health Bureau 2017, 6 months report, unpublished). The projected population of Dire Dawa Administration, Ethiopia, in 2018 (the year in which the data were collected) was 479 000, of which 240 000 were males and 239 000 were females; the male to female ratio was nearly 1:1. 
Of these, 176 000 (36.7%) lived in rural areas, where the population is sparsely distributed [26]. Rural households had an average of 4.9 persons per household. According to the 2017 Regional Health Bureau reports, there were 34 150 households in the four districts of rural Dire Dawa with 20 118 children under the age of 5 years. The latrine coverage of the administration was 54.9%, and rural households stored their drinking water in 20 l jerry-cans (Dire Dawa Administration Regional Health Bureau: 2017 Facility Information, unpublished).

Source and study population

The source population consisted of households with at least one child under the age of 5 years in the 38 rural kebeles of the four districts; the study population consisted of households with at least one child under the age of 5 years selected randomly from two kebeles. Households having at least one child less than 5 years of age were included. Households with mothers/caregivers who were severely ill and unable to respond to the questionnaire, households with under-five children with persistent diarrhea, and households with children younger than 6 months were excluded.

Trial design and procedure

This study used a parallel-group cluster randomized controlled trial to evaluate the effectiveness of household chlorination in reducing diarrhea incidence in rural Dire Dawa from October 2018 through January 2019. There are four districts consisting of 38 kebeles in rural Dire Dawa. Each kebele was divided into sub-kebeles (clusters having distinct neighborhoods with defined geographical boundaries). Two districts were randomly selected. From these two districts, six kebeles consisting of 50 clusters were identified for this study. Of these, eight clusters from two kebeles were selected randomly. Households appropriate for this study had at least one under-five child. In households with more than one under-five child, the index under-five child (the child to be studied) was selected by lottery.
The participant collection procedure is illustrated in Fig. 1. Selection of participants and the follow-up flow for the community randomized controlled trial, rural Dire Dawa, eastern Ethiopia, September 2018 through January 2019 The principal investigator held a meeting with community leaders of the recruited kebeles to randomize clusters to the intervention arm (IA) or control arm (CA). Each cluster was given a unique identifier on a piece of paper, and the papers were folded and placed into a jar. Then, an equal number of papers coded with "IA" or "CA" was placed into another jar. In front of the community leaders, two anonymous individuals (individuals who were not participating in coding) drew papers, one at a time, from the two jars, and the draws from the two jars were matched; the drawing and matching continued until all papers were drawn and matched. Those clusters whose unique identifiers were matched with "IA" were randomized to the intervention group and those matched with "CA" were randomized to the control group. Cluster randomization is often advocated to minimize treatment "contamination" between intervention and control participants [27]. The intervention and control clusters in this trial were far apart, lessening the likelihood of treatment contamination between intervention and control households. Furthermore, in order to control information cross-contamination, the intervention providers were not aware of the purpose of providing the intervention materials. The study households were enrolled in April 2018 and allocated in September 2018, and the study was conducted from October 2018 through January 2019. Baseline data were collected from the two kebeles; 99.5% of the households in the intervention arm and 99.0% in the control arm completed the trial. The data collectors conducted a baseline survey after obtaining informed written consent from the mother or caregiver of the under-five child in each household.
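The matched two-jar lottery described above can be sketched as a minimal simulation (the cluster names and the random seed here are illustrative, not taken from the trial):

```python
import random

random.seed(7)  # illustrative seed, not part of the trial

clusters = [f"cluster-{i}" for i in range(1, 9)]  # 8 clusters, as in the trial
arm_papers = ["IA"] * 4 + ["CA"] * 4              # equal numbers of IA and CA papers

# Shuffle both jars, then match one draw from each jar at a time
# until all papers are drawn and matched.
random.shuffle(clusters)
random.shuffle(arm_papers)
allocation = dict(zip(clusters, arm_papers))

intervention = sorted(c for c, arm in allocation.items() if arm == "IA")
control = sorted(c for c, arm in allocation.items() if arm == "CA")
print(intervention, control)
```

Because the two jars hold equal numbers of papers, every run allocates exactly four clusters to each arm, mirroring the balanced design.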
Finally, bottles of WaterGuard (one bottle per month), used as the water treatment intervention, were distributed at the cluster level to each household in the intervention arm. Cluster selection Two of the six eligible kebeles were selected by a simple random sampling technique. These two kebeles had a total of 25 clusters. Eight clusters were selected, again by simple random sampling. The criteria for selecting clusters were as follows: they did not need to be close together; and they had to contain a minimum of 51 households with at least one under-five child. In this study, a sub-kebele is considered the cluster unit. Sample size and sampling procedure This cluster randomized controlled trial assessed the effect of household chlorination on reduction of childhood diarrhea. In line with this, the sample size was calculated after considering 0.35 as the magnitude of the effect size. This figure means the researchers looked for a 35.0% reduction in the incidence of diarrhea in the intervention arm compared to the control arm; it was taken from a recent interventional study conducted in Jigjiga District, Somali Region, in which the intervention arm received water, sanitation, and hygiene educational messages and hand washing with soap [28]. Furthermore, the following were taken into consideration to arrive at the calculated sample size of under-five children: 80% power, 5% significance level, 95% confidence level, 10% contingency for non-responses, and a design effect of 4 from clustering; the calculations yielded a sample size of 204 under-five children per arm. The design effect is an adjustment to the sample size for the multi-stage sampling procedure used in this trial. To achieve the calculated sample size, a multi-stage sampling procedure was used to recruit the participants from the rural area of Dire Dawa. Two of the six eligible kebeles were selected by a simple random sampling technique. From these, eight of the 25 sub-kebeles were selected by simple random sampling.
Finally, participant households were selected from "family folders" that were regularly updated by health centers and health posts, again by simple random sampling (Fig. 2). Cluster selection flow for the community randomized controlled trial, rural Dire Dawa, eastern Ethiopia, September 2018 through January 2019 Sample size calculation for clusters To calculate the number of clusters required, the simple sample size calculation method for cluster randomized trials developed by Hayes and Bennett (1999) was used. For an individually randomized trial, the standard formula requires y person-weeks in each arm, where $$ y = {\left( z_{\alpha/2} + z_{\beta} \right)}^2 \left( \lambda_0 + \lambda_1 \right) / {\left( \lambda_0 - \lambda_1 \right)}^2 $$ In this formula, zα/2 and zβ are standard normal distribution values corresponding to upper tail probabilities of α/2 and β, respectively. This sample size provides a power of 100(1 − β)% of obtaining a significant difference (P < α on a two-sided test), assuming that the true (population) rates in the presence and absence of the intervention are λ1 and λ0, respectively. After substituting zα/2 = 1.96, zβ = 0.84, λ0 = 10.4 [18] (i.e., the incidence of diarrhea in the control arm), and λ1 = 4.5 [18] (i.e., the incidence of diarrhea in the intervention arm), the calculated person-weeks (y) becomes 3.36. For a cluster randomized trial with 3.36 person-weeks of follow up in each cluster, the number of clusters required (c) is given by the following: $$ c = 1 + {\left( z_{\alpha/2} + z_{\beta} \right)}^2 \left[ \left( \lambda_0 + \lambda_1 \right)/y + k^2 \left( \lambda_0^2 + \lambda_1^2 \right) \right] / {\left( \lambda_0 - \lambda_1 \right)}^2 $$ In this formula, k is the coefficient of variation (SD/mean) of the true rates between clusters within each arm.
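The two formulas above can be evaluated with a short script. This is a sketch: the trial does not report the value of k it used, so k = 0.25, the upper end of the usual field-trial guideline, is assumed here; both this assumption and the trial's own figure round up to four clusters per arm.

```python
from math import ceil

def person_weeks(z_alpha, z_beta, lam0, lam1):
    """y: person-weeks per arm for an individually randomized trial."""
    return (z_alpha + z_beta) ** 2 * (lam0 + lam1) / (lam0 - lam1) ** 2

def clusters_per_arm(z_alpha, z_beta, lam0, lam1, y, k):
    """c: clusters per arm, with k the between-cluster coefficient of variation."""
    num = (z_alpha + z_beta) ** 2 * ((lam0 + lam1) / y + k ** 2 * (lam0 ** 2 + lam1 ** 2))
    return 1 + num / (lam0 - lam1) ** 2

y = person_weeks(1.96, 0.84, 10.4, 4.5)                  # ≈ 3.36, as in the text
c = clusters_per_arm(1.96, 0.84, 10.4, 4.5, y, k=0.25)   # k = 0.25 is an assumption
print(round(y, 2), ceil(c))  # prints: 3.36 4
```

Rounding c up to the next whole cluster gives the four clusters per arm (eight in total) used in the trial.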
As a rough guideline, experience drawn from several field trials suggests that k is often ≤ 0.25 and seldom exceeds 0.5 for most health outcomes [29]. With the entities given above, the calculated number of clusters became 3.53, which is approximately four clusters in each arm and eight in total. Four clusters were therefore used for the intervention arm and four for the control arm. Intervention providers supplied the intervention material bleach (sodium hypochlorite) to each participating household in the intervention arm (one bottle every month) for home water disinfection regularly for 16 weeks from October 2018 through January 2019. They also explained and demonstrated how to treat water using the sodium hypochlorite. The demonstration of how to make water safe using sodium hypochlorite followed CDC instructions: add one cupful of sodium hypochlorite (1.2% chlorine) to 5 gal or 20 L of water in a jerry-can; cover the jerry-can and shake it until the sodium hypochlorite is completely mixed with the water; wait 30 min to render the water safe to drink [30]. The concentration of chlorine present in most disinfected drinking water ranges between 0.2 and 1 mg/L [31]. The intervention providers instructed the mothers/caregivers to keep the bottle of WaterGuard out of reach of children. Intervention providers regularly checked the depletion of the WaterGuard from the bottle given to each household so the bottle could be replaced. The shelf life of the distributed WaterGuard ended in December 2019. Intervention providers encouraged study participants to drink only treated water but neither encouraged nor discouraged hand washing or other preventive actions that can decrease the occurrence of diarrhea. In the control households, study participants were allowed to continue with their usual habits of water collection and water storage. In the control arm, intervention providers neither encouraged nor discouraged the drinking of water treated with WaterGuard.
Each participating household in the control arm was visited by data collectors once every 2 weeks to collect information about the occurrence or non-occurrence of diarrhea among under-five children. In this trial, incidence of diarrhea was calculated as the number of new diarrhea episodes divided by the total person-time (i.e. person-weeks of observation) [32]. Diarrhea was defined as passage of three or more loose or liquid stools in a day [33]. An occurrence of diarrhea was considered a new episode if the child passed 3 days without symptoms of diarrhea [34]. Operational definition of terms Control arm: Group of clusters provided with no household water treating product and allowed to continue with their customary practices. Effect: The influence of treating drinking water with a water treatment product on the incidence of diarrhea in under-five children. Household water treatment with chlorine: Treatment of drinking water using bleach (sodium hypochlorite) at the household level. Improved drinking water: Drinking water obtained from a pipe, public tap, borehole, protected spring, protected dug well, or rainwater. Intervention arm: Group of clusters provided with point-of-use water treatment product to treat their drinking water. Point-of-use water treatment: Treatment of drinking water for household use at the point-of-use. Baseline information about diarrhea-related variables such as environmental, socio-demographic and behavioral factors and two-week prevalence of diarrhea was collected using a structured questionnaire. The questionnaire was first prepared in English and then translated to the local language, Afaan Oromo, and then translated back to English to maintain consistency in the two versions. Data were collected using Afaan Oromo. Field workers were eight data collectors, eight intervention providers, and two supervisors. The data collectors and intervention providers were local residents of their respective kebeles. 
They had completed grade 10 and spoke the local language of the community. The supervisors were local residents of their respective kebeles, high school graduates who spoke the local language of the community. All field workers received training from the first author on techniques of interviewing and proper data collection for 2 days before the actual work. The data collection tool was pre-tested on the second day of training in a nearby kebele that was not included in the study, and amendments were made where needed. The main response variable of this study was diarrhea in under-five children. Data collectors collected information every 2 weeks for a period of 16 weeks, a total of eight times, on the following parameters: occurrence of diarrhea, water treatment practices, and free residual chlorine. The secondary response variable of this study was study participants' adherence to the intervention. The intervention material was sodium hypochlorite (a chemical compound with the formula NaOCl) distributed under the name WaterGuard; sodium hypochlorite is an unstable salt usually produced in aqueous solution and used as a disinfecting agent. Adherence to WaterGuard use was checked on unannounced days regularly once every 2 weeks by testing a drinking water sample for residual free chlorine using the N,N-diethyl-p-phenylenediamine (DPD) colorimetric method (WAGTECH DPD1); any level greater than or equal to 0.2 mg/L was considered adequate adherence to treatment. For microbiological water quality analysis, 250 ml water samples were collected at baseline and at the end of the study from drinking water storage containers of 10.0% of the participating households selected by simple random sampling. Sterile bottles were used, and 1.0% sodium thiosulfate was added to the water samples from both the intervention and control arms to neutralize any chlorine present in the water.
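The adherence rule above reduces to a simple threshold on the DPD reading. A minimal sketch (the sample readings below are hypothetical, not trial data):

```python
def adherent(free_residual_chlorine_mg_per_l, threshold=0.2):
    """A household counts as adherent when free residual chlorine is >= 0.2 mg/L."""
    return free_residual_chlorine_mg_per_l >= threshold

# Hypothetical DPD readings from one round of unannounced visits (mg/L)
readings = [0.0, 0.15, 0.2, 0.45, 0.8]
share_adherent = sum(adherent(r) for r in readings) / len(readings)
print(share_adherent)  # 3 of the 5 hypothetical samples meet the threshold
```

Averaging this share over the eight rounds of visits gives the overall adherence figure reported later in the paper.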
Samples were transported to Dire Dawa Water Supply and Sanitation Authority Laboratory in an ice box for processing within 4 hours of collection. Membrane filtration was used for detection and quantification of Escherichia coli from the water samples collected. To control the quality of the test, sterile water (negative control sample) was run with the collected water samples in the membrane filtration technique. Of the three indicator bacteria used for indication of water contamination (total coliforms, fecal thermo-tolerant coliforms, and E. coli), E. coli is regarded as the most reliable indicator of fecal contamination [35]. The membrane filter technique can be used to test relatively large numbers of samples and yields results more rapidly than the multiple fermentation tube technique [36]. To ensure the quality of data, standardized tools and procedures were used. Adequate training was given to data collectors, intervention providers, and supervisors on techniques of interviewing, observation, and data recording, specific techniques for promoting drinking water treatment, and general approaches to community motivation and supervision. The expiration dates of the laboratory reagents, DPD tablets, and bottles of WaterGuard were checked. Proper drinking water treatment using sodium hypochlorite was demonstrated to intervention households by the intervention providers. Data on occurrence or non-occurrence of diarrhea, use of WaterGuard, and free residual chlorine were collected once every 2 weeks for 4 months. Data for the intervention study were collected from 405 households for 4 months. Before data analysis, all entered data were cleaned by carrying out a frequency run procedure to identify and re-enter data missed in the original questionnaires. Water samples collected from participating households at baseline and at the end of the study were labeled using each household's unique identifier and the results entered accordingly. 
Baseline and follow up visit data forms were checked for completeness and consistency before entry. The cleaned data were entered into EpiData Version 3.1 (EpiData Association, Odense, Denmark) and exported to STATA version 15.0 (StataCorp LP, College Station, TX) for analysis. All study participants were analyzed in the group to which they were randomized (i.e., by intention-to-treat analysis) in order to compare the incidence of diarrhea among children under the age of 5 years between intervention and control arms. The baseline data for intervention and control arms were analyzed and compared. Generalized estimating equations (GEE) with a Poisson distribution family, log link, and exchangeable correlation matrix were used to compute the crude incidence rate ratio and the corresponding 95% confidence intervals. GEE were also used to compute the adjusted incidence rate ratio after controlling for confounding variables [37]. Characteristics of intervention and control groups In this trial, 204 households were assigned to the water treatment arm (intervention group) and 204 were assigned to the control arm (control group). Of these, 203 and 202 households in the intervention and control groups, respectively, completed 16 weeks of follow-up. One household from the intervention group and two households from the control group refused to participate. Data were collected from 405 households at baseline. The mean family size per household was 5.0. The median age of the mothers/guardians was 30 (IQR: 28–34) years and the median age of the children under 5 years of age was 28 (IQR: 18–43) months. The average cluster size was 51 households with at least one under-five child. Of the under-five children, 50.4% were males and 15.6% were not breastfed. At baseline no statistically significant socio-demographic difference was observed between the intervention and control households (Table 1).
Table 1 Baseline demographic, environmental and socioeconomic characteristics of the study population in rural Dire Dawa, eastern Ethiopia, 2018 With regard to environmental sanitation characteristics, 37.3, 17.8, and 17.8% of households had a latrine, a refuse disposal facility, and soap in the home, respectively. About 15.3% of households used an unimproved water source for drinking, and the majority of these (13.6%) obtained drinking water from a stream. In 79.3% of households, the water storage container was narrow necked. Intervention and control households showed no significant difference in most of their baseline environmental characteristics (Table 1). Prior to the intervention the two-week prevalence of diarrhea was 24.3% in the control group and 24.6% in the intervention group. With regard to economic indicators, only 7.2 and 7.9% of households possessed a watch and a television, respectively. Similarly, there were no differences between intervention and control arms with regard to diarrheal disease and socio-economic characteristics (Table 1). Incidence of diarrhea Under-five children living in households that received sodium hypochlorite (bleach) experienced fewer cases of diarrhea than children living in the control households. In the intervention households, a total of 281 cases of diarrhea were documented (8.7 cases per 100 person-weeks of observation), but in the control households a total of 446 cases of diarrhea were documented (13.8 cases per 100 person-weeks of observation). In the entire study period children under the age of 5 years in the intervention arm experienced diarrhea on 1.3% of days whereas those in the control arm experienced diarrhea on 2.0% of days. Figure 3 illustrates diarrhea occurrence at each of the eight observation points during the 16 weeks of the study. Throughout the follow up period, fewer children in the intervention arm experienced diarrheal episodes than in the control arm.
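The incidence rates above follow directly from the reported episode counts and person-time. A sketch of the arithmetic, assuming each household that completed follow-up contributed the full 16 weeks of observation:

```python
def rate_per_100_pw(episodes, households, weeks=16):
    """Incidence per 100 person-weeks: episodes / (households x weeks) x 100."""
    return episodes / (households * weeks) * 100

intervention = rate_per_100_pw(281, 203)  # 281 episodes over 203 completing households
control = rate_per_100_pw(446, 202)       # 446 episodes over 202 completing households
crude_irr = intervention / control
print(round(intervention, 1), round(control, 1), round(crude_irr, 2))
# prints: 8.7 13.8 0.63
```

The crude incidence rate ratio of about 0.63 is close to the adjusted IRR of 0.64 reported later, as expected given the balanced baseline characteristics.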
Number of episodes of diarrhea versus every two-week observation in rural Dire Dawa, eastern Ethiopia, from October 2018 through January 2019 The effect of household chlorination on reduction of childhood diarrhea differed across the age groups of the children. The highest reduction was obtained in infants (42.7%), whereas the lowest reduction was observed in children aged 3 to 4 years (30.4%) (Table 2). Table 2 Number of episodes and incidence of diarrhea in control and intervention arms by age group of under-five children in rural Dire Dawa, eastern Ethiopia, from October 2018 through January 2019 Generalized estimating equations (GEE) with an exchangeable correlation matrix and a log link Poisson distribution family were employed to control for potential confounders in the multivariable analysis. Consequently, after adjusting for age of the child, gender of the child, child breastfeeding, family size, presence of refuse disposal facility, availability of latrine, availability of handwashing facility, and presence of soap in the home, under-five children in the intervention group had a lower risk of diarrhea (adjusted IRR = 0.64, 95% CI: 0.57–0.73). A 36.0% lower incidence of diarrhea was observed in the intervention group in comparison to the control group (Table 3). Table 3 Multivariable analysis of the effect of water treatment intervention on the incidence of diarrhea among children under the age of five years in rural Dire Dawa, eastern Ethiopia, from October 2018 through January 2019 Drinking water was sampled for microbial testing twice from 10.0% of participating households, at the beginning and end of the study period. At the beginning of the study, 85.7% of the samples from the intervention households and 80.0% of the samples from the control households were contaminated and no significant E. coli difference was detected (P = 0.426).
However, at the end of the study period, 38.1% of the samples from the intervention households and 85.0% of the samples from the control households were contaminated, and a significant difference in E. coli counts was detected (P = 0.018). Counts of E. coli were also compared before and after the intervention. In the intervention households, E. coli counts were significantly lower post-intervention (P = 0.003). However, in the control households, no significant difference in E. coli counts was detected between before and after the intervention (P = 0.692). Adherence to the intervention In the intervention group, free residual chlorine was measured by the data collectors once every 2 weeks on a regular basis but on unannounced days throughout the study period. On average, 81.3% of the drinking water samples examined had free residual chlorine of ≥ 0.2 mg/L. At baseline, eight households (4.0%) in the control arm and 11 households (5.4%) in the intervention arm were treating their drinking water using a variety of methods (boiling, straining through cloth, and adding WaterGuard). Delivery of treated and piped water to the populations in low-income countries is one of the essential United Nations Sustainable Development Goals [38]. Nevertheless, use of POU water treatments is the interim solution for people who obtain water from unimproved sources until the goal is achieved. The present study evaluated the effectiveness of household water treatment in reducing diarrhea among children under 5 years of age in rural Dire Dawa using a community-based cluster randomized controlled trial. Children in households using chlorination for their stored drinking water experienced fewer diarrheal episodes than did children in households using usual practices of water collection and storage.
Point-of-use water treatment, specifically chlorination of drinking water, resulted in a significantly lower (36.0%) incidence of diarrhea among children under the age of 5 years compared with children who were not given the intervention (adjusted IRR = 0.64, 95% CI: 0.57–0.73). This result was obtained even though the children in the intervention households were living in a highly vulnerable environment where 84.0% of households had no refuse disposal facility, 96.0% had no handwashing facility, 85.0% had no soap for washing hands, 41.0% had no latrine, 74.0% of the fathers were subsistence farmers, and 46.0% of the mothers and 30.0% of the fathers had no formal education. In our study, 80.0% of households stored their treated drinking water in narrow-mouthed containers. Other studies reported that water in narrow-mouthed containers was less likely to be contaminated than water stored in wide-mouthed containers [39,40,41]. This is primarily due to the use of bowls to take water from wide-mouthed containers. Diarrheal disease interventions that involve treating drinking water should therefore include the use of narrow-mouthed containers. In our study, considerable improvement in the microbial quality of drinking water was observed in the intervention households. This is in agreement with results from an analogous trial in Kersa District, eastern Ethiopia [18] and a review by Clasen and colleagues [8]. Together, these studies suggest that consistent disinfection of drinking water by chlorination prevents the water from being contaminated. In the present study, the 36.0% lower incidence of diarrhea among children under the age of 5 years who received the intervention corroborates results of trials conducted in Kenya that reported a 34.0% reduction in diarrhea [42] and in Guatemala that showed a 39.0% reduction associated with water treatment [43].
On the other hand, this reduction was lower than those in similar studies conducted in Bolivia (44.0%) [14], Zambia (48.0%) [15], Liberia (90.0%) [20], Pakistan (55.0%) [16], Haiti (59.0%) [17], Kersa District in Ethiopia (58.0%) [18] and Bolivia (79.0%) [19]. The difference might be explained by the fact that in our study area some of the diarrheal cases might be caused by the presence of chlorine-resistant parasitic protozoa such as oocysts of Cryptosporidium species and cysts of Giardia lamblia. The 11.0, 17.0 and 23.0% lower incidences of diarrhea attained in Ghana [11], Kenya [44] and Bangladesh [12], respectively, were considerably lower than that of our study (36.0%). This difference may be due to variations in study participants' adherence to the intervention, because the effectiveness of household water treatment interventions at the community level may be limited by inadequate adherence [45]. Furthermore, the effectiveness of the intervention in our study was greater than in studies by Jensen et al. (2003), Colford et al. (2005), Jain et al. (2010), and Boisson et al. (2013). These results may be due in part to our monitoring of participants' compliance with the intervention using the DPD colorimetric test on unannounced days once every 2 weeks, giving a measured compliance of 81.3%. Our compliance finding was consistent with results from trials in Zambia (80.5%) [15] and in Kersa District, eastern Ethiopia (79.9%) [18]. Compliance in our study was higher than in trials in Guatemala (35.0%) [13] and in Haiti (56.0%) [17], but the 85.0% compliance achieved in Liberia [20] was greater than in our study. Intermittent use of the water treatment product due to the odor and taste of sodium hypochlorite may be one reason for these variations.
In our study, the lower incidence of diarrhea (compared with non-intervention) attained in infants (42.7%) was greater than in children one to two years (36.8%) and three to four years (30.4%); this result is in agreement with a study carried out in Bolivia [14]. In most households in the study area, mothers usually give more attention to younger children than to their older siblings, likely ensuring greater use of the intervention with the younger children. Additionally, in Ethiopia most mothers boil the water they give their infants and store it in a sealed container, a practice that may further reduce diarrhea transmission in younger children [46]. Hence, the synergistic effect of chlorination, boiling, and giving greater care to young children may account for the lower incidence of diarrhea in infants than in their older siblings. The lowest reduction in incidence among children 3 to 4 years of age suggests they might have been exposed to pathogens through fecal-oral transmission routes other than contaminated drinking water. Furthermore, children at this age actively move and play on the ground, increasing their chances of acquiring infections. Therefore, in order to reduce the occurrence of diarrhea in this age group, further intervention studies focusing on these aspects of sanitation are needed. Among all water quality interventions, household-based chlorination is the most cost-effective [47]. In our study area, ready-made sodium hypochlorite can be purchased for USD 0.46 (15 Ethiopian Birr) per 150 ml bottle from drug vendors. This amount is enough for a rural family for approximately 1 month. Therefore, promoting regular use of the disinfectant is not only highly beneficial for the rural population, but also an affordable way to keep their children healthy. Accordingly, further research is needed to identify whether intervention households maintain good water handling and storage practices after completion of similar projects.
There were four limitations in this study. First, we were unable to employ blinding due to the odor and taste of sodium hypochlorite. Second, we could not collect information on diarrhea on a seven-day basis. Recall bias may have occurred because information about the frequency and duration of diarrhea was collected once every 2 weeks. However, we tried to minimize the occurrence of recall bias by giving proper training to the data collectors. Third, the water treatment product (sodium hypochlorite) was provided to the intervention households free of charge; as a result, courtesy bias and the Hawthorne effect (observer effect) may have increased the effect size of the intervention. However, we tried to minimize the chances of an inflated effect size by using independent intervention providers to provide the bottles of WaterGuard. Thus, the data collectors collected the data on episodes of diarrhea once every 2 weeks and had nothing to do with provision of the intervention material (WaterGuard). Fourth, some under-five children might have disliked the odor and taste of the chlorinated water and drunk untreated water from other sources, such as neighboring households, a practice we could not monitor. In conclusion, in rural areas in Dire Dawa, water chlorination at the household level using liquid bleach (1.2% sodium hypochlorite) considerably decreased the incidence of diarrhea among children under the age of 5 years. Therefore, chlorinating drinking water at the household level may be a valuable interim solution to the problem of high rates of diarrheal disease until potable water is made accessible to the majority of the populations in Dire Dawa Administration and other Ethiopian communities. We also recommend similar interventions at the community level with the intent of assessing acceptance, expediency, and efficiency of household water treatment with chlorine solution.
The datasets generated and analyzed for this study will not be available publicly due to data protection law. CA: Control arm DBP: Disinfection by-product DPD: Diethyl para-Phenylene Diamine GEE: Generalized Estimating Equations HAA: Haloacetic acids HWTS: Household Water Treatment and Safe Storage IA: Intervention Arm IRR: Incidence rate ratio NRERC: National Research Ethics Review Committee PACTR: Pan African Clinical Trial Registry POU: Point-of-use PWO: Person Week Observation THM: Trihalomethane GBD 2017 Diarrhoeal Disease Collaborators. Quantifying risks and interventions that have affected the burden of diarrhoea among children younger than 5 years: an analysis of the Global Burden of Disease Study 2017. Lancet Infect Dis. 2020;20(1):37–59. Gedamu G, Kumie A, Haftu D. Magnitude and associated factors of diarrhea among under five children in Farta wereda, North West Ethiopia. Qual Prim Care. 2017;25(4):199–207. Alebel A, Tesema C, Temesgen B, Gebrie A, Petrucka P, Kibret GD. Prevalence and determinants of diarrhea among under-five children in Ethiopia: a systematic review and meta-analysis. PLoS One. 2018;13(6):e0199684. https://doi.org/10.1371/journal.pone.0199684. Prüss-Ustün A, Bartram J, Clasen T, Colford JM Jr, Cumming O, Curtis V, et al. Burden of disease from inadequate water, sanitation and hygiene in low-and middle-income settings: a retrospective analysis of data from 145 countries. Tropical Med Int Health. 2014;19(8):894–905. Howard G, Ince ME, Schmoll O, Smith M. Drinking-water quality and health: rapid assessment of drinking-water quality: a handbook for implementation. Geneva: World Health Organization; 2012. WHO. Unsafe water, inadequate sanitation and hygiene: combating waterborne disease at the household level, vol. 7. Geneva: World Health Organization; 2007. UNICEF. Why is household water treatment and safe storage an important intervention for preventing disease? Promotion of household water treatment and safe storage in UNICEF WASH programmes.
Washington: UNICEF; 2008. p. 1. Clasen T, Schmidt WP, Rabie T, Roberts I, Cairncross S. Interventions to improve water quality for preventing diarrhoea: systematic review and meta-analysis. BMJ. 2007;334(7597):782. UNICEF. Effectiveness of WASH interventions in reducing diarrhoea morbidity: Evidence base: water, sanitation and hygiene interventions. New York: UNICEF; 2009. p. 1–2. Sampson S. Chlorine poisoning. 2017. https://www.healthline.com/health/chlorine-poisoning. Accessed 17 Mar 2020. Cha S, Kang D, Tuffuor B, Lee G, Cho J, Chung J, et al. The effect of improved water supply on diarrhea prevalence of children under five in the Volta region of Ghana: a cluster-randomized controlled trial. Int J Environ Res Public Health. 2015;12(10):12127–43. Pickering AJ, Crider Y, Sultana S, Swarthout J, Goddard FG, Islam SA, et al. Effect of in-line drinking water chlorination at the point of collection on child diarrhoea in urban Bangladesh: a double-blind, cluster-randomised controlled trial. Lancet Glob Health. 2019;7(9):e1247–56. Reller ME, Mendoza CE, Lopez MB, Alvarez M, Hoekstra RM, Olson CA, et al. A randomized controlled trial of household-based flocculant-disinfectant drinking water treatment for diarrhea prevention in rural Guatemala. Am J Trop Med Hyg. 2003;69(4):411–9. Quick RE, Venczel LV, Mintz ED, Soleto L, Aparicio J, Gironaz M, et al. Diarrhoea prevention in Bolivia through point-of-use water treatment and safe storage: a promising new strategy. Epidemiol Infect. 1999;122(1):83–90. Quick RE, Kimura A, Thevos A, Tembo M, Shamputa I, Hutwagner L, et al. Diarrhea prevention through household-level water disinfection and safe storage in Zambia. Am J Trop Med Hyg. 2002;66(5):584–9. Luby SP, Agboatwalla M, Painter J, Altaf A, Billhimer W, Keswick B, et al. Combining drinking water treatment and hand washing for diarrhoea prevention, a cluster randomised controlled trial. Tropical Med Int Health. 2006;11(4):479–89. Harshfield E, Lantagne D, Turbes A, Null C.
Evaluating the sustained health impact of household chlorination of drinking water in rural Haiti. Am J Trop Med Hyg. 2012;87(5):786–95. Mengistie B, Berhane Y, Worku A. Household water chlorination reduces incidence of diarrhea among under-five children in rural Ethiopia: a cluster randomized controlled trial. PLoS One. 2013;8(10):e77887. https://doi.org/10.1371/journal.pone.0077887. Lindquist ED, George CM, Perin J, Neiswender de Calani KJ, Norman WR, Davis TP, et al. A cluster randomized controlled trial to reduce childhood diarrhea using hollow fiber water filter and/or hygiene–sanitation educational interventions. Am J Trop Med Hyg. 2014;91(1):190–7. Doocy S, Burnham G. Point-of-use water treatment and diarrhoea reduction in the emergency context: an effectiveness trial in Liberia. Tropical Med Int Health. 2006;11(10):1542–52. Jensen PK, Ensink JH, Jayasinghe G, van der Hoek W, Cairncross S, Dalsgaard A. Effect of chlorination of drinking-water on water quality and childhood diarrhoea in a village in Pakistan. J Health Popul Nutr. 2003;21(1):26–31. Colford JM Jr, Wade TJ, Sandhu SK, Wright CC, Lee S, Shaw S, et al. A randomized, controlled trial of in-home drinking water intervention to reduce gastrointestinal illness. Am J Epidemiol. 2005;161(5):472–82. Jain S, Sahanoon OK, Blanton E, Schmitz A, Wannemuehler KA, Hoekstra RM, et al. Sodium dichloroisocyanurate tablets for routine treatment of household drinking water in periurban Ghana: a randomized controlled trial. Am J Trop Med Hyg. 2010;82(1):16–22. Boisson S, Stevenson M, Shapiro L, Kumar V, Singh LP, Ward D, et al. Effect of household-based drinking water chlorination on diarrhoea among children under five in Orissa, India: a double-blind randomised placebo-controlled trial. PLoS Med. 2013;10(8):e1001497. https://doi.org/10.1371/journal.pmed.1001497. Luby SP. Quality of drinking water. BMJ. 2007;334(7597):755–6. Central Statistical Agency. Population Projections for Ethiopia 2007–2037. 
www.csa.gov.et Accessed 25 Mar 2020. Torgerson DJ. Contamination in trials: is cluster randomisation the answer? BMJ. 2001;322(7282):355–7. Hashi A, Kumie A, Gasana J. Hand washing with soap and WASH educational intervention reduces under-five childhood diarrhoea incidence in Jigjiga District, eastern Ethiopia: a community-based cluster randomized controlled trial. Prev Med Rep. 2017;6:361–8. Hayes RJ, Bennett S. Simple sample size calculation for cluster randomized trials. Int J Epidemiol. 1999;28:319–26. CDC. How to make water safe using WaterGuard. https://www.cdc.gov/healthywater/pdf/.../water_treatment_waterguard_seasia_508.pdf. Accessed 14 Sept 2017. White GC. Current chlorination and dechlorination practices in the treatment of potable water, wastewater, and cooling water. In: In Proceedings of the Conference on the Environmental Impact of Water Chlorination; 1975. Schmidt WP, Arnold BF, Boisson S, Genser B, Luby SP, Barreto ML, et al. Epidemiological methods in diarrhoea studies—an update. Int J Epidemiol. 2011;40(6):1678–92. World Health Organization. Diarrhoeal disease fact sheet. Accessed 16 Dec 2019. Geneva: WHO library; 2017. Morris SS, Cousens SN, Lanata CF, Kirkwood BR. Diarrhoea—defining the episode. Int J Epidemiol. 1994;23(3):617–23. Wright J, Gundry S, Conroy R. Household drinking water in developing countries: a systematic review of microbiological contamination between source and point-of-use. Tropical Med Int Health. 2004;9(1):106–17. Bartram J, Pedley S. Microbiological analyses. In: Bartram J, Ballance R, editors. Water quality monitoring–a practical guide to the design and implementation of freshwater quality studies and monitoring Programmes. Finland: UNEP/WHO; CRC Press; 1996. p. 221–47. Liang KY, Zeger SL. Longitudinal data analysis using generalized linear models. Biometrika. 1986;73(1):13–22. World Health Organization. World health statistics 2016: monitoring health for the SDGs sustainable development goals. 
Geneva: World Health Organization; 2016. Han AM, Oo KN, Midorikawa Y, Shwe S. Contamination of drinking water during collection and storage. Trop Geogr Med. 1989;41(2):138–40. Deb BC, Sircar BK, Sengupta PG, De SP, Mondal SK, Gupta DN, et al. Studies on interventions to prevent eltor cholera transmission in urban slums. Bull World Health Organ. 1986;64(1):127. Pinfold J. Faecal contamination of water and fingertip-rinses as a method for evaluating the effect of low-cost water supply and sanitation activities on faeco-oral disease transmission. I. a case study in rural north-East Thailand. Epidemiol Infect. 1990;105(2):363–75. Conroy RM, Elmore-Meegan M, Joyce T, McGuigan KG, Barnes J. Solar disinfection of drinking water and diarrhoea in Maasai children: a controlled field trial. Lancet. 1996;348(9043):1695–7. Chiller TM, Mendoza CE, Lopez MB, Alvarez M, Hoekstra RM, Keswick BH, et al. Reducing diarrhoea in Guatemalan children: randomized controlled trial of flocculant-disinfectant for drinking-water. Bull World Health Organ. 2006;84(1):28–35. Crump JA, Otieno PO, Slutsker L, Keswick BH, Rosen DH, Hoekstra RM, et al. Household based treatment of drinking water with flocculant-disinfectant for preventing diarrhoea in areas with turbid source water in rural western Kenya: cluster randomised controlled trial. BMJ. 2005;331(7515):478. Enger KS, Nelson KL, Rose JB, Eisenberg JN. The joint effects of efficacy and compliance: a study of household water treatment effectiveness against childhood diarrhea. Water Res. 2013;47(3):1181–90. Central Statistical Agency (CSA) [Ethiopia] and ICF International. Ethiopia Demographic and Health Survey 2016. Addis Ababa, Ethiopia, and Rockville, Maryland: CSA and ICF; 2016. World Health Organization. Water quality interventions to prevent diarrhoea: cost and cost-effectiveness. (No. WHO/HSE/WSH/08.02). Geneva: World Health Organization; 2008. 
We are grateful to the Ethiopian Institute of Water Resources of Addis Ababa University, Dire Dawa Regional Health Bureau, and Dire Dawa Water Supply and Sanitation Authority for their financial and material support. We also want to thank the study participants, data collectors, intervention providers, supervisors, and the staff of Wahil Health Center, Biyo Awale Health Center and Bishan Behe Health Post. National Research Ethics Review Committee is acknowledged for reviewing and approving the study protocol. Ann Byers kindly assisted with editing the manuscript at short notice. The principal investigator Ephrem Tefera Solomon, a PhD student in Water and Public Health at Addis Ababa University, Ethiopian Institute of Water Resources, was funded by the University with ID.No. GSR/0105/08. Hence, the authors declare that no funding was received for this study from external sources. Ethiopian Institute of Water Resources, Addis Ababa University, Addis Ababa, Ethiopia Ephrem Tefera Solomon & Sirak Robele Haramaya University, College of Health and Medical Sciences, Harar, Ethiopia Ephrem Tefera Solomon & Bezatu Mengistie University of California, San Francisco Medical Center, San Francisco, CA, USA Helmut Kloos Ephrem Tefera Solomon Sirak Robele Bezatu Mengistie ETS SR BM conceived the study, drafted the proposal, monitored data collection, and coordinated field work. ETS SR BM carried out data analysis and interpretation of the findings and wrote the manuscript. HK edited the manuscript. All authors read and approved the final manuscript. Correspondence to Ephrem Tefera Solomon. This protocol was reviewed and approved by National Research Ethics Review Committee of Ethiopia in Addis Ababa. Letters were written to Dire Dawa Regional Health Bureau, Wahil Health Center and Biyo Awale Health Center, to get their permission and for their possible cooperation by the Ethiopian Institute of Water Resources. 
Consent was obtained from Dire Dawa Regional Health Bureau and the respective rural community leaders. Information about the study and its objectives was provided to mothers and caregivers and their written consent was obtained. Data gathered from the study participants was used only for the purpose of this study and results were kept confidential. Data collectors advised mothers and caregivers to seek care for diarrhea of their children at nearby health posts or health centers. At the end of the study, households in the control arm were provided with bottles of WaterGuard in order to avoid bias. The authors declare that they have no competing interest. Solomon, E.T., Robele, S., Kloos, H. et al. Effect of household water treatment with chlorine on diarrhea among children under the age of five years in rural areas of Dire Dawa, eastern Ethiopia: a cluster randomized controlled trial. Infect Dis Poverty 9, 64 (2020). https://doi.org/10.1186/s40249-020-00680-9 Water treatment with chlorine Under-five children WaterGuard Cluster randomized controlled trial
Counting and enumeration
Chair: Joe Sawada (University of Guelph)
KAREL KLOUDA, Czech Technical University in Prague
Synchronizing delay of (epi)Sturmian morphisms [PDF]
The synchronizing delay (SD) of a morphism is a constant strongly related to essential properties of the language generated by the morphism: morphisms with finite SD (i.e., circular/recognizable morphisms) are known not to contain arbitrarily long repetitions and to have a regular structure of bispecial factors determining the language complexity. An algorithm deciding finiteness of the SD is known. However, there is no known universal upper bound on the value of the SD, and its actual value has been computed only for the simplest classes of morphisms. We have found an effective upper bound for the class of Sturmian morphisms (attained by standard Sturmian morphisms) and also an upper bound for a more general class of episturmian morphisms: the SD of a primitive episturmian morphism $\varphi$ over an alphabet $\mathcal{A}$ is less than $$ \frac{1}{\#\mathcal{A} -1} \left( \sum_{a\, \in \mathcal{A}} |\varphi(a)| - 1 \right) + \max_{a\, \in \mathcal{A}}|\varphi(a)| -3\,. $$
SAMUEL SIMON, Simon Fraser University
The asymptotics of reflectable weighted walks in arbitrary dimension [PDF]
A walk on the square lattice is a sequence of steps from a given step set, and the length of the walk is the size of the sequence. The enumeration and asymptotics of walks have been of interest, and much progress has been made within the past few decades. We look at a particular weighted family of walks confined to the positive orthant in $d$ dimensions. We introduce the standard techniques used in manipulating functional equations to extract the desired terms from generating functions. Then we combine results from analytic combinatorics and complex analysis to find asymptotics.
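As an aside to Klouda's abstract: the displayed upper bound is a closed-form expression that can be evaluated directly from the images of a morphism. The sketch below is illustrative only — the dict encoding of a morphism and the Fibonacci example are my own, and the function computes the stated bound, not the synchronizing delay itself.

```python
def sd_upper_bound(morphism):
    """Evaluate the closed-form upper bound on the synchronizing delay of a
    primitive episturmian morphism, given as a dict letter -> image word:
    (sum of image lengths - 1) / (alphabet size - 1) + max image length - 3."""
    lengths = [len(w) for w in morphism.values()]
    k = len(morphism)  # alphabet size #A
    return (sum(lengths) - 1) / (k - 1) + max(lengths) - 3

# Fibonacci (standard Sturmian) morphism: a -> ab, b -> a
fib = {"a": "ab", "b": "a"}
print(sd_upper_bound(fib))  # (3 - 1)/1 + 2 - 3 = 1.0
```

The same call works for larger alphabets, e.g. the Tribonacci morphism `{"a": "ab", "b": "ac", "c": "a"}`.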
FOSTER TOM, University of California, Berkeley
Classifying the near-equality of ribbon Schur functions [PDF]
We consider the problem of determining when the difference of two ribbon Schur functions is a single Schur function. We prove that this near-equality phenomenon occurs in sixteen infinite families and we conjecture that these are the only possible cases. Towards this converse, we prove that under certain additional assumptions the only instances of near-equality are among our sixteen families. In particular, we prove that our first ten families are a complete classification of all cases where the difference of two ribbon Schur functions is a single Schur function whose corresponding partition has at most two parts of size at least 2. We then provide a framework for interpreting the remaining six families and we explore some ideas towards resolving our conjecture in general. We also determine some necessary conditions for the difference of two ribbon Schur functions to be Schur-positive.
AMÉLIE TROTIGNON, Simon Fraser University & Institut Denis Poisson, Université de Tours
Walks avoiding a quadrant [PDF]
Enumeration of planar lattice walks in cones has many applications in combinatorics and probability theory. The objects are amenable to treatment by many techniques: combinatorics, complex analysis, probability theory, computer algebra, and Galois theory of difference equations. Walks restricted to the first quadrant are well studied, but the case of three quadrants has been approached only recently. In this talk, we generalize the analytic method for walks in a quarter plane to walks in the three-quarter plane. This method is composed of three main steps: write a functional equation that the generating function of walks satisfies, transform it to a boundary value problem, and solve this problem. The result is a contour-integral expression for the generating function. The advantage of this method is to provide a uniform treatment for the study of walks.
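As a concrete companion to the walk-enumeration abstracts above, a short dynamic program produces the exact small-length counts whose growth the analytic methods describe asymptotically. The step set and region predicates below are my own illustrative choices (simple steps; quarter plane as in Simon's setting with d = 2, and the three-quarter plane of Trotignon's talk).

```python
def walk_counts(n_max, allowed, steps=((0, 1), (0, -1), (1, 0), (-1, 0))):
    """Count lattice walks of each length 0..n_max that start at the origin
    and stay inside the region described by the predicate `allowed(x, y)`.
    Dynamic programming over walk endpoints."""
    counts = []
    state = {(0, 0): 1}  # endpoint -> number of walks ending there
    for _ in range(n_max + 1):
        counts.append(sum(state.values()))
        new_state = {}
        for (x, y), c in state.items():
            for dx, dy in steps:
                nx, ny = x + dx, y + dy
                if allowed(nx, ny):
                    new_state[(nx, ny)] = new_state.get((nx, ny), 0) + c
        state = new_state
    return counts

quarter = walk_counts(3, lambda x, y: x >= 0 and y >= 0)
three_quarter = walk_counts(2, lambda x, y: not (x < 0 and y < 0))
print(quarter, three_quarter)  # [1, 2, 6, 18] [1, 4, 14]
```

Such brute-force counts are useful for checking the first coefficients of the generating functions obtained analytically.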
Recent questions tagged gate2004 Consider a parity check code with three data bits and four parity check bits. Three of the code words are 0101011, 1001101 and 1110001. Which of the following are also code words? 0010111 0110110 1011010 0111010 plz give the solution in detail I and III I, II and III II and IV I, II, III and IV asked Apr 5, 2017 in Computer Networks by Prince Kumar 1 (389 points) | 179 views Consider three IP networks $A, B$ and $C$. Host $H_A$ in network $A$ sends messages each containing $180$ $bytes$ of application data to a host $H_C$ in network $C$. The TCP layer prefixes a $20$ byte header to the message. This passes through an intermediate network $B$. The ... other overheads. $325.5$ $\text{Kbps}$ $354.5$ $\text{Kbps}$ $409.6$ $\text{Kbps}$ $512.0$ $\text{Kbps}$ asked Apr 24, 2016 in Computer Networks by jothee Veteran (105k points) | 6k views computer-networks Consider the following program segment for a hypothetical CPU having three user registers $R_1, R_2$ and $R_3.$ \begin{array}{|l|l|c|} \hline \text {Instruction} & \text{Operation }& \text{Instruction size (in Words)} \\\hline \text{MOV $R_1,5000$} & ... clock cycles }\\\hline \end{array} The total number of clock cycles required to execute the program is $29$ $24$ $23$ $20$ asked Apr 24, 2016 in CO and Architecture by jothee Veteran (105k points) | 6.4k views machine-instructions Consider the grammar rule $E \rightarrow E1 - E2$ for arithmetic expressions. The code generated is targeted to a CPU having a single user register. The subtraction operation requires the first operand to be in the register. If $E1$ and $E2$ do not have any ...
first Evaluation of $E1$ and $E2$ should necessarily be interleaved Order of evaluation of $E1$ and $E2$ is of no consequence asked Nov 12, 2014 in Compiler Design by Vikrant Singh Boss (13.6k points) | 3.6k views compiler-design target-code-generation Choose the best matching between the programming styles in Group 1 and their characteristics in Group 2. ... $P-3\quad Q-4 \quad R-1\quad S-2$ $P-3\quad Q-4\quad R-2\quad S-1$ asked Sep 19, 2014 in Programming by Kathleen Veteran (52.2k points) | 1.6k views programming-paradigms $L_1$ is a recursively enumerable language over $\Sigma$. An algorithm $A$ effectively enumerates its words as $\omega_1, \omega_2, \omega_3, \dots .$ Define another language $L_2$ over $\Sigma \cup \left\{\text{#}\right\}$ ... are true $S_1$ is true but $S_2$ is not necessarily true $S_2$ is true but $S_1$ is not necessarily true Neither is necessarily true asked Sep 19, 2014 in Theory of Computation by Kathleen Veteran (52.2k points) | 4.2k views theory-of-computation turing-machine Consider the following grammar G: $S \rightarrow bS \mid aA \mid b$ $A \rightarrow bA \mid aB$ $B \rightarrow bB \mid aS \mid a$ Let $N_a(w)$ and $N_b(w)$ denote the number of a's and b's in a string $\omega$ respectively. The language $L(G)$ over $\left\{a, b\right\}^+$ generated ... 
$\left\{w \mid N_b(w) = 3k, k \in \left\{0, 1, 2, \right\}\right\}$ asked Sep 19, 2014 in Compiler Design by Kathleen Veteran (52.2k points) | 1.7k views The language $\left\{a^mb^nc^{m+n} \mid m, n \geq1\right\}$ is regular context-free but not regular context-sensitive but not context free type-0 but not context sensitive identify-class-language The following finite state machine accepts all those binary strings in which the number of $1$'s and $0$'s are respectively: divisible by $3$ and $2$ odd and even even and odd divisible by $2$ and $3$ finite-automata A program takes as input a balanced binary search tree with $n$ leaf nodes and computes the value of a function $g(x)$ for each node $x$. If the cost of computing $g(x)$ is: $\Large \min \left ( \substack{\text{number of leaf-nodes}\\\text{in left-subtree of $ ... worst-case time complexity of the program is? $\Theta (n)$ $\Theta (n \log n)$ $\Theta(n^2)$ $\Theta (n^2\log n)$ asked Sep 19, 2014 in DS by Kathleen Veteran (52.2k points) | 9.1k views binary-search-tree The recurrence equation $ T(1) = 1$ $T(n) = 2T(n-1) + n, n \geq 2$ evaluates to $2^{n+1} - n - 2$ $2^n - n$ $2^{n+1} - 2n - 2$ $2^n + n $ asked Sep 19, 2014 in Algorithms by Kathleen Veteran (52.2k points) | 5.3k views GATE2004-83, ISRO2015-40 The time complexity of the following C function is (assume $n > 0$) int recursive (int n) { if(n == 1) return (1); else return (recursive (n-1) + recursive (n-1)); } $O(n)$ $O(n \log n)$ $O(n^2)$ $O(2^n)$ Let $G_1=(V,E_1)$ and $G_2 =(V,E_2)$ be connected graphs on the same vertex set $V$ with more than two vertices. 
If $G_1 \cap G_2= (V,E_1\cap E_2)$ is not a connected graph, then the graph $G_1\cup G_2=(V,E_1\cup E_2)$ cannot have a cut vertex must have a cycle must have a cut-edge (bridge) has chromatic number strictly greater than those of $G_1$ and $G_2$ A point is randomly selected with uniform probability in the $X-Y$ plane within the rectangle with corners at $(0,0), (1,0), (1,2)$ and $(0,2).$ If $p$ is the length of the position vector of the point, the expected value of $p^{2}$ is $\left(\dfrac{2}{3}\right)$ $\quad 1$ $\left(\dfrac{4}{3}\right)$ $\left(\dfrac{5}{3}\right)$ asked Sep 19, 2014 in Probability by Kathleen Veteran (52.2k points) | 2.9k views uniform-distribution How many graphs on $n$ labeled vertices exist which have at least $\frac{(n^2 - 3n)}{ 2}$ edges ? $^{\left(\frac{n^2-n}{2}\right)}C_{\left(\frac{n^2-3n} {2}\right)}$ $^{{\large\sum\limits_{k=0}^{\left (\frac{n^2-3n}{2} \right )}}.\left(n^2-n\right)}C_k\\$ $^{\left(\frac{n^2-n}{2}\right)}C_n\\$ $^{{\large\sum\limits_{k=0}^n}.\left(\frac{n^2-n}{2}\right)}C_k$ asked Sep 19, 2014 in Graph Theory by Kathleen Veteran (52.2k points) | 4.8k views graph-theory permutation-and-combination Two $n$ bit binary strings, $S_1$ and $S_2$ are chosen randomly with uniform probability. The probability that the Hamming distance between these strings (the number of bit positions where the two strings differ) is equal to $d$ is $\dfrac{^{n}C_{d}}{2^{n}}$ $\dfrac{^{n}C_{d}}{2^{d}}$ $\dfrac{d}{2^{n}}$ $\dfrac{1}{2^{d}}$ The minimum number of colours required to colour the following graph, such that no two adjacent vertices are assigned the same color, is $2$ $3$ $4$ $5$ graph-coloring In an $M \times N$ matrix all non-zero entries are covered in $a$ rows and $b$ columns. 
Then the maximum number of non-zero entries, such that no two are on the same row or column, is $\leq a +b$ $\leq \max(a, b)$ $\leq \min(M-a, N-b)$ $\leq \min(a, b)$ asked Sep 19, 2014 in Linear Algebra by Kathleen Veteran (52.2k points) | 3.1k views linear-algebra Mala has the colouring book in which each English letter is drawn two times. She wants to paint each of these $52$ prints with one of $k$ colours, such that the colour pairs used to colour any two letters are different. Both prints of a letter can also be coloured with the same colour. What is the minimum value of $k$ that satisfies this requirement? $9$ $8$ $7$ $6$ asked Sep 19, 2014 in Combinatory by Kathleen Veteran (52.2k points) | 4.2k views An examination paper has $150$ multiple choice questions of one mark each, with each question having four choices. Each incorrect answer fetches $-0.25$ marks. Suppose $1000$ students choose all their answers randomly with uniform probability. The sum total of the expected marks obtained by all these students is $0$ $2550$ $7525$ $9375$ The inclusion of which of the following sets into $S = \left\{ \left\{1, 2\right\}, \left\{1, 2, 3\right\}, \left\{1, 3, 5\right\}, \left\{1, 2, 4\right\}, \left\{1, 2, 3, 4, 5\right\} \right\} $ is necessary and sufficient to make $S$ a complete lattice under the partial order defined by set containment ... $\{1\}, \{1, 3\}$ $\{1\}, \{1, 3\}, \{1, 2, 3, 4\}, \{1, 2, 3, 5\}$ asked Sep 19, 2014 in Set Theory & Algebra by Kathleen Veteran (52.2k points) | 3.1k views set-theory&algebra partial-order The following is the incomplete operation table of a $4-$ ... The last row of the table is $c\;a\;e\; b$ $c\; b\; a\; e$ $c\; b\; e\; a$ $c\; e\; a\; b$ group-theory How many solutions does the following system of linear equations have? 
$-x + 5y = -1$ $x - y = 2$ $x + 3y = 3$ infinitely many two distinct solutions unique none system-of-equations The following propositional statement is $\left(P \implies \left(Q \vee R\right)\right) \implies \left(\left(P \wedge Q \right)\implies R\right)$ satisfiable but not valid valid a contradiction None of the above asked Sep 19, 2014 in Mathematical Logic by Kathleen Veteran (52.2k points) | 1.8k views mathematical-logic propositional-logic A 4-stage pipeline has the stage delays as $150$, $120$, $160$ and $140$ $nanoseconds$, respectively. Registers that are used between the stages have a delay of $5$ $nanoseconds$ each. Assuming constant clocking rate, the total time taken to process $1000$ data ... will be: $\text{120.4 microseconds}$ $\text{160.5 microseconds}$ $\text{165.5 microseconds}$ $\text{590.0 microseconds}$ asked Sep 19, 2014 in CO and Architecture by Kathleen Veteran (52.2k points) | 4.4k views A hard disk with a transfer rate of $10$ Mbytes/second is constantly transferring data to memory using DMA. The processor runs at $600$ MHz, and takes $300$ and $900$ clock cycles to initiate and complete DMA transfer respectively. If the size of the transfer is $20$ Kbytes, what is the percentage of processor time consumed for the transfer operation? $5.0 \%$ $1.0\%$ $0.5\%$ $0.1\%$ The microinstructions stored in the control memory of a processor have a width of $26$ bits. Each microinstruction is divided into three fields: a micro-operation field of $13$ bits, a next address field $(X),$ and a MUX select field $(Y).$ There are $8$ status bits in the input of ... size of the control memory in number of words? $10, 3, 1024$ $8, 5, 256$ $5, 8, 2048$ $10, 3, 512$ microprogramming Let $A = 1111 1010$ and $B = 0000 1010$ be two $8-bit$ $2's$ complement numbers. Their product in $2's$ complement is $1100 0100$ $1001 1100$ $1010 0101$ $1101 0101$ Consider a small two-way set-associative cache memory, consisting of four blocks. 
For choosing the block to be replaced, use the least recently used (LRU) scheme. The number of cache misses for the following sequence of block addresses is: $8, 12, 0, 12, 8$. $2$ $3$ $4$ $5$ cache-memory
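Two of the closed forms quoted in the questions above lend themselves to quick numerical sanity checks: the solution $2^{n+1}-n-2$ of the recurrence $T(1)=1$, $T(n)=2T(n-1)+n$, and the Hamming-distance probability $\binom{n}{d}/2^{n}$ for two uniformly random $n$-bit strings. The sketch below is not part of the original question set; it merely verifies both identities for small parameters.

```python
from itertools import product
from math import comb

# Recurrence T(1) = 1, T(n) = 2T(n-1) + n has closed form 2^(n+1) - n - 2.
T = 1
for n in range(2, 16):
    T = 2 * T + n
    assert T == 2 ** (n + 1) - n - 2

# P(Hamming distance = d) for two uniform random n-bit strings is C(n, d) / 2^n:
# count matching pairs exhaustively over all 4^n ordered pairs of strings.
n, d = 6, 2
hits = sum(
    1
    for s1 in product("01", repeat=n)
    for s2 in product("01", repeat=n)
    if sum(a != b for a, b in zip(s1, s2)) == d
)
assert hits / 4 ** n == comb(n, d) / 2 ** n
print("both identities check out")
```

The exhaustive check works because, for each of the $2^n$ choices of the first string, exactly $\binom{n}{d}$ second strings lie at distance $d$.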
Murray, P. W., Gimzewski, J. K., Schlittler, R. R. & Thornton, G. Templating a face-centered cubic (110) termination of C60. Surface science 367, L79–L84 (1996). Murray, P. W. et al. Ultimate Limits of Fabrication and Measurement 189–196 (Springer Netherlands, 1995). Joachim, C. & Gimzewski, J. K. Analysis of low-voltage I (V) characteristics of a single C60 molecule. EPL (Europhysics Letters) 30, 409 (1995). Berndt, R. et al. Atomic resolution in photon emission induced by a scanning tunneling microscope. Physical review letters 74, 102 (1995). Joachim, C., Gimzewski, J. K., Schlittler, R. R. & Chavy, C. Electronic transparence of a single C60 molecule. Physical review letters 74, 2102–2105 (1995). , et al. Erratum: A femtojoule calorimeter using micromechanical sensors [Rev. Sci. Instrum. 65, 3793 (1994)]. Review of Scientific Instruments 66, 3083–3083 (1995). Murray, P. W. et al. 1→3 dimensional structures on a uni-directional substrate. (1995). Meyer, E., Gimzewski, J. K., Gerber, C. & Schlittler, R. R. Ultimate Limits of Fabrication and Measurement 89–95 (Springer Netherlands, 1995). Gimzewski, J. K., Gerber, C., Meyer, E. & Schlittler, R. R. Forces in Scanning Probe Methods 123–131 (Springer Netherlands, 1995). Jung, T., Schlittler, R., Gimzewski, J. K. & Himpsel, F. J. One-dimensional metal structures at decorated steps. Applied Physics A 61, 467–474 (1995). Welland, M. E. & Gimzewski, J. K. Perspectives on the limits of fabrication and measurement. PHILOS T ROY SOC A 353, 279–279 (1995). Gimzewski, J. K. Photons and Local Probes 189–208 (Springer Netherlands, 1995). Berndt, R. & Gimzewski, J. K. Photon emission in scanning tunneling microscopy: interpretation of photon maps of metallic systems. SPIE MILESTONE SERIES MS 107, 376–376 (1995). Gimzewski, J. K. & Humbert, A.
Scanning tunneling microscopy of surface microstructure on rough surfaces. SPIE MILESTONE SERIES MS 107, 249–249 (1995). Modesti, S., Gimzewski, J. K. & Schlittler, R. R. Stable and metastable reconstructions at the C60/Au (110) interface. Surface science 331, 1129–1135 (1995). Welland, M. E. & Gimzewski, J. K. Ultimate limits of fabrication and measurement. (Kluwer Academic, 1995). Gimzewski, J. K. & Welland, M. E. Ultimate Limits of Fabrication and Measurements. NATO ASI Series 292, (1995). Berndt, R., Gimzewski, J. K. & Schlittler, R. R. Bias-dependent STM images of oxygen-induced structures on Ti (0001) facets. Surface science 310, 85–88 (1994). Joachim, C. & Gimzewski, J. K. CONTACTING A SINGLE C60 MOLECULE. Proceedings of the NATO Advanced Research Workshop: (Humboldt-Universität zu Berlin, 1994). Gimzewski, J. K., Modesti, S. & Schlittler, R. R. Cooperative self-assembly of Au atoms and C60 on Au (110) surfaces. Physical review letters 72, 1036 (1994). David, T., Gimzewski, J. K., Purdie, D., Reihl, B. & Schlittler, R. R. Epitaxial growth of C60 on Ag (110) studied by scanning tunneling microscopy and tunneling spectroscopy. Physical Review B 50, 5810 (1994). , et al. A femtojoule calorimeter using micromechanical sensors. Review of Scientific Instruments 65, 3793–3798 (1994). Gaisch, R. et al. Internal structure of C60 on Au (110) as observed by low-temperature scanning tunneling microscopy. Journal of Vacuum Science & Technology B 12, 2153–2155 (1994). Reihl, B. et al. Low-temperature scanning tunneling microscopy. Physica B: Condensed Matter 197, 64–71 (1994). Gimzewski, D., Parrinello, P., Reihl, D. & ,. Molecular recording/reproducing method and recording medium. (1994). Dumas, P. et al. Nanostructuring of porous silicon using scanning tunneling microscopy. Journal of Vacuum Science & Technology B 12, 2067–2069 (1994). Gimzewski, J. K., Gerber, C., Meyer, E. & Schlittler, R. R. Observation of a chemical reaction using a micromechanical sensor.
Chemical Physics Letters 217, 589–594 (1994). Berndt, R. & Gimzewski, J. K. Photon Emission from C60 in a Nanoscopic Cavity. Proceedings of the NATO Advanced Research Workshop: (Humboldt-Universität zu Berlin, 1994). Dumas, P. et al. Photon spectroscopy, mapping, and topography of 85% porous silicon. Journal of Vacuum Science & Technology B 12, 2064–2066 (1994). Galaxy, I. Photothermal spectroscopy with femtojoule sensitivity using a micromechanical device. Nature 372, 3 (1994). Gimzewski, J. K., Modesti, S., David, T. & Schlittler, R. R. Scanning tunneling microscopy of ordered C60 and C70 layers on Au (111), Cu (111), Ag (110), and Au (110) surfaces. Journal of Vacuum Science & Technology B 12, 1942–1946 (1994). Berndt, R. & Gimzewski, J. K. Atomic and Nanometer-Scale Modification of Materials: Fundamentals and Applications 327–335 (Springer Netherlands, 1993). Dumas, P. et al. Direct observation of individual nanometer-sized light-emitting structures on porous silicon surfaces. EPL (Europhysics Letters) 23, 197 (1993). Berndt, R., Gimzewski, J. K. & Johansson, P. Electromagnetic interactions of metallic objects in nanometer proximity. Physical review letters 71, 3493 (1993). Gaisch, R. et al. Internal structure of C60 fullerene molecules as revealed by low-temperature STM. Applied Physics A 57, 207–210 (1993). Berndt, R. & Gimzewski, J. K. Isochromat spectroscopy of photons emitted from metal surfaces in an STM. Annalen Der Physik 505, 133–140 (1993). Gimzewski, J. K., Berndt, R. & Schlittler, R. R. Nanosources and Manipulation of Atoms Under High Fields and Temperatures: Applications 219–228 (Springer Netherlands, 1993). Gimzewski, J.
K., Berndt, R. & Schlittler, R. R. Local Experiments Using Nanofabricated Structures in STM. NATO ASI SERIES E APPLIED SCIENCES 235, 219–219 (1993). Gimzewski, J. K., Modesti, S., Gerber, C. & Schlittler, R. R. Observation of a new Au (111) reconstruction at the interface of an adsorbed C60 overlayer. Chemical physics letters 213, 401–406 (1993). Gimzewski, J. K. et al. Near Field Optics 333–340 (Springer Netherlands, 1993). Gimzewski, J. K., VATEL, O. & Hallimaoui, A. Ph. DUMAS*, M. GU*, C. SYRYKH*, F. SALVAN*,* GPEC, URA CNRS 783, Fac. de Luminy, 13288, Marseille Cedex 9, France. Optical Properties of Low Dimensional Silicon Structures: Proceedings of the NATO Advanced Research Workshop, Meylan, France, March 1-3, 1993 244, 157 (1993). Berndt, R. R. J. K. B. R. R. W. D. M. et al. Photon emission at molecular resolution induced by a scanning tunneling microscope. Science 262, 1425–1427 (1993). Berndt, R. et al. Photon emission from adsorbed C60 molecules with sub-nanometer lateral resolution. Applied Physics A 57, 513–516 (1993). Berndt, R., Gimzewski, J. K. & Schlittler, R. R. Photon emission from nanostructures in an STM. Nanostructured Materials 3, 345–348 (1993).
Social Indicators Research
January 2016, Volume 125, Issue 2, pp 589–612 | Cite as
Measuring Urban Agglomeration: A Refoundation of the Mean City-Population Size Index
Andre Lemelin, Fernando Rubiera-Morollón, Ana Gómez-Loscos
In this paper, we put forth the view that the potential for urbanization economies increases with interaction opportunities. From that premise follow three fundamental properties that an agglomeration index should possess: (1) to increase with the concentration of population and conform to the Pigou–Dalton transfer principle; (2) to increase with the absolute size of constituent population interaction zones; and (3) to be consistent in aggregation. Limiting our attention to pairwise interactions, and invoking the space-analytic foundations of local labor market area (LLMA) delineation, we develop an index of agglomeration based on the number of interaction opportunities per capita in a geographical area. This leads to Arriaga's mean city-population size, which is the mathematical expectation of the size of the LLMA in which a randomly chosen individual lives. The index has other important properties. It does not require an arbitrary population threshold to separate urban from non-urban areas. It is easily adapted to situations where an LLMA lies partly outside the geographical area for which agglomeration is measured. Finally, it can be satisfactorily approximated when data is truncated or aggregated into size-classes. We apply the index to the Spanish NUTS III regions, and evaluate its performance by examining its correlation with the location quotients of several knowledge-intensive business services known to be highly sensitive to urbanization economies. The Arriaga index's correlations are clearly stronger than those of either the classical degree of urbanization or the Hirschman–Herfindahl concentration index.
Keywords: Urban and regional economics; Urbanization; Agglomeration economies; Indexes; Spain. JEL Classification: R11, R12. Appendix 1: Geometric Interpretation of the Agglomeration Index Define \(n_{0} = 0\) and let K be the number of LLMAs in the geographical area under consideration. Then $$f_{i} = \frac{{n_{i} }}{{\sum\limits_{j = 1}^{K} {n_{j} } }} = \frac{{n_{i} }}{{\sum\limits_{j = 0}^{K} {n_{j} } }}$$ is the fraction of the population residing in the ith LLMA (with \(f_{0} = 0\)), and \(F_{i} = \sum\nolimits_{j = 1}^{i} {f_{j} } = \sum\nolimits_{j = 0}^{i} {f_{j} }\) is the cumulative distribution (with \(F_{0} = 0\)). LLMAs are assumed to be ordered from smallest to largest. The area above the curve is computed as: $$I = \sum\limits_{i = 1}^{K} {\left( {1 - F_{i - 1} } \right)\;\left( {n_{i} - n_{i - 1} } \right)}$$ In our example, this is equal to 0.0242. Note that the first term of formula (14) is the area above the curve to the left of the first LLMA in Fig. 1. This reflects the fact that LLMAs cover the whole territory, so that the threshold between urban and non-urban is irrelevant.
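The equivalence between the area-above-the-curve computation and the direct formula \(I = \sum_i f_i n_i\) is easy to check numerically. The sketch below uses made-up LLMA population figures, not the paper's data; both routines should agree to floating-point precision:

```python
# Hypothetical LLMA populations; any positive numbers work.
sizes = [10_000, 25_000, 60_000, 150_000, 400_000]

def index_area(n):
    """Index as the area above the cumulative distribution:
    I = sum_i (1 - F_{i-1}) (n_i - n_{i-1}), with n_0 = F_0 = 0."""
    n = sorted(n)                 # LLMAs ordered from smallest to largest
    total = sum(n)
    area, F, prev = 0.0, 0.0, 0.0
    for ni in n:
        area += (1.0 - F) * (ni - prev)
        F += ni / total           # update the cumulative distribution F_i
        prev = ni
    return area

def index_direct(n):
    """Index as the mean city-population size: I = sum_i f_i n_i."""
    total = sum(n)
    return sum(x * x for x in n) / total

assert abs(index_area(sizes) - index_direct(sizes)) < 1e-6
```

With the figures above, both routines return the expected LLMA size of a randomly chosen resident, confirming numerically that the area formula and the weighted-mean formula coincide.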
The first term in (15) is equal to the size of the smallest LLMA: $$\left( {1 - F_{0} } \right)\;\left( {n_{1} - n_{0} } \right) = \left( {1 - 0} \right)\;\left( {n_{1} - 0} \right) = n_{1}$$ Equation (15) can be written as: $$I = \sum\limits_{i = 1}^{K} {\left( {1 - \sum\limits_{j = 0}^{i - 1} {f_{j} } } \right)\;\left( {n_{i} - n_{i - 1} } \right)}$$ $$I = \sum\limits_{i = 1}^{K} {\left( {\sum\limits_{j = i}^{K} {f_{j} } } \right)\;\left( {n_{i} - n_{i - 1} } \right)}$$ $$I = \sum\limits_{i = 1}^{K} {\left( {\sum\limits_{j = i}^{K} {f_{j} } } \right)\;n_{i} } - \sum\limits_{i = 1}^{K} {\left( {\sum\limits_{j = i}^{K} {f_{j} } } \right)\;n_{i - 1} }$$ Remembering that \(n_{0} = 0\), $$I = \sum\limits_{i = 1}^{K} {\left( {\sum\limits_{j = i}^{K} {f_{j} } } \right)\;n_{i} } - \sum\limits_{i = 1}^{K - 1} {\left( {\sum\limits_{j = i + 1}^{K} {f_{j} } } \right)\;n_{i} }$$ $$I = f_{K} n_{K} + \sum\limits_{i = 1}^{K - 1} {\left( {\sum\limits_{j = i}^{K} {f_{j} } } \right)\;n_{i} } - \sum\limits_{i = 1}^{K - 1} {\left( {\sum\limits_{j = i + 1}^{K} {f_{j} } } \right)\;n_{i} }$$ $$I = f_{K} n_{K} + \sum\limits_{i = 1}^{K - 1} {\left( {\sum\limits_{j = i}^{K} {f_{j} } - \sum\limits_{j = i + 1}^{K} {f_{j} } } \right)\;n_{i} }$$ $$I = f_{K} n_{K} + \sum\limits_{i = 1}^{K - 1} {f_{i} n_{i} } = \sum\limits_{i = 1}^{K} {f_{i} n_{i} }$$ which is exactly Eq. (3). Appendix 2: Transfer Principle A key property of the index is that it correctly reflects the change in the potential for interactions and urbanization economies of any reallocation of population. This property is close to the Pigou–Dalton transfer principle for measures of inequality, which states that any change in the distribution that unambiguously reduces inequality must be reflected in a decrease in its measure. Let \(\Delta n_{i}\) represent the change in the relative population size of the ith LLMA. A reallocation of population is restricted by the condition that \(\sum\limits_{i = 1}^{K} {\Delta n_{i} } = 0\).
Any reallocation can be represented as a series of reallocations between two LLMAs, and any reallocation between two LLMAs can be represented as a series of reallocations between an LLMA and the following or preceding one when LLMAs are ordered according to size. Therefore, we need only consider a reallocation of population from the (s−1)th LLMA to the sth (from an LLMA to the next-larger one): $$\Delta n_{s} = - \Delta n_{s - 1} > 0,\quad {\text{and}}\;\Delta n_{i} = 0\;{\text{for }}i \ne s,s - 1$$ According to our theoretical a priori, such a reallocation raises the potential for interactions. What effect does it have on the index? Following Eq. (2), define: $$\Delta f_{i} = \frac{{\Delta n_{i} }}{{\sum\limits_{j = 1}^{K} {n_{j} } }} = \frac{{\Delta n_{i} }}{{\sum\limits_{j = 0}^{K} {n_{j} } }}$$ where, in view of (26), total population is unchanged, $$\sum\limits_{j = 1}^{K} {n_{j} } + \Delta n_{s - 1} + \Delta n_{s} = \sum\limits_{j = 1}^{K} {n_{j} }$$ so that $$\Delta f_{s} = - \Delta f_{s - 1} > 0$$ The value of the index after the reallocation is: $$I^{\prime} = \sum\limits_{i \ne s - 1,s}^{{}} {f_{i} n_{i} } + \left( {f_{s - 1} - \Delta f_{s} } \right)\left( {n_{s - 1} - \Delta n_{s} } \right) + \left( {f_{s} + \Delta f_{s} } \right)\left( {n_{s} + \Delta n_{s} } \right)$$ $$I^{\prime} = \sum\limits_{i \ne s - 1,s}^{{}} {f_{i} n_{i} } + f_{s - 1} n_{s - 1} + f_{s} n_{s} + \left( {f_{s} - f_{s - 1} } \right)\Delta n_{s} + \Delta f_{s} \left( {n_{s} - n_{s - 1} } \right) + 2\Delta f_{s} \Delta n_{s}$$ $$I^{\prime} = \sum\limits_{i = 1}^{K} {f_{i} n_{i} } + \left( {f_{s} - f_{s - 1} } \right)\Delta n_{s} + \Delta f_{s} \left( {n_{s} - n_{s - 1} } \right) + 2\Delta f_{s} \Delta n_{s}$$ $$I^{\prime} = I + \left( {f_{s} - f_{s - 1} } \right)\Delta n_{s} + \Delta f_{s} \left( {n_{s} - n_{s - 1} } \right) + 2\Delta f_{s} \Delta n_{s}$$ Given that the LLMAs are ordered from the smallest to the largest, \(n_{s} > n_{s - 1}\) and \(f_{s} > f_{s - 1}\), so that \(I^{\prime} > I\).
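The inequality \(I^{\prime} > I\) can likewise be checked with a quick numerical experiment. The sketch below (illustrative sizes, not the paper's data) moves a small amount of population from an LLMA to the next-larger one and verifies that the index rises while total population is conserved:

```python
def arriaga_index(n):
    """I = sum_i f_i n_i = (sum_i n_i^2) / (sum_i n_i)."""
    total = sum(n)
    return sum(x * x for x in n) / total

n = [10.0, 20.0, 40.0, 80.0]    # hypothetical LLMA populations, sorted by size
before = arriaga_index(n)

delta = 5.0                      # population reallocated from LLMA 2 to LLMA 3
m = list(n)
m[1] -= delta                    # the (s-1)th LLMA shrinks...
m[2] += delta                    # ...and the sth LLMA grows by the same amount

assert sum(m) == sum(n)          # total population unchanged
assert arriaga_index(m) > before  # transfer toward the larger LLMA raises I
```

Repeating this for any adjacent pair (keeping the size ordering intact) gives the same outcome, which is the discrete content of the transfer-principle argument above.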
Appendix 3: Relationship With the Pareto Distribution The empirically estimated exponent of the Pareto city-size distribution (a generalization of Zipf's rank-size rule) has been used as a measure of the concentration of an urban system (Rosen and Resnick 1980). Following the notation established above, the (discrete) Pareto distribution can be written as: $$K + 1 - i = An_{i}^{ - a}$$ where K is the number of cities (ranked from the smallest to the largest), \(n_{i}\) is the size of city i, and A and a are parameters. Parameter A can be calibrated from the size of the largest city: $$K + 1 - K = 1 = An_{K}^{ - a}$$ $$A = n_{K}^{a}$$ Inverting (34), we obtain: $$n_{i}^{a} = \frac{A}{K + 1 - i}$$ $$n_{i} = \left( \frac{A}{K + 1 - i} \right)^{1/a}$$ Total urban population is: $$N = \sum\limits_{i = 1}^{K} n_{i} = \sum\limits_{i = 1}^{K} \left( \frac{A}{K + 1 - i} \right)^{1/a}$$ And so it is quite straightforward to construct a cumulative distribution similar to the one in Fig. 1 reflecting a theoretical Pareto distribution. It is then possible to apply our proposed index to a theoretical Pareto distribution using formula (3). There results $$I = \frac{\sum\limits_{i = 1}^{K} \left( \frac{A}{K + 1 - i} \right)^{2/a} }{\sum\limits_{i = 1}^{K} \left( \frac{A}{K + 1 - i} \right)^{1/a} }$$ where we exploit the identity in Eq. (3). If we assume that the number of cities K and the size of the largest city \(n_{K}\) are fixed, then, using (36), (40) can be written as: $$I = \frac{\sum\limits_{i = 1}^{K} \left( \frac{n_{K}^{a} }{K + 1 - i} \right)^{2/a} }{\sum\limits_{i = 1}^{K} \left( \frac{n_{K}^{a} }{K + 1 - i} \right)^{1/a} }$$ $$I = \frac{\sum\limits_{i = 1}^{K} n_{K}^{2} \left( \frac{1}{K + 1 - i} \right)^{2/a} }{\sum\limits_{i = 1}^{K} n_{K} \left( \frac{1}{K + 1 - i} \right)^{1/a} }$$ $$I = n_{K} \frac{\sum\limits_{i = 1}^{K} \left( \frac{1}{K + 1 - i} \right)^{2/a} }{\sum\limits_{i = 1}^{K} \left( \frac{1}{K + 1 - i} \right)^{1/a} }$$ The derivative of the index with respect to the Pareto parameter is: $$\frac{\partial I}{\partial a} = n_{K} \left( \frac{1}{a} \right)^{2} \frac{2\sum\limits_{j = 1}^{K} \sum\limits_{i = 1}^{K} j^{ - 1/a} i^{ - 2/a} \ln i \; - \; \sum\limits_{j = 1}^{K} \sum\limits_{i = 1}^{K} j^{ - 2/a} i^{ - 1/a} \ln i}{\left( \sum\limits_{i = 1}^{K} i^{ - 1/a} \right)^{2} }$$ The sign of that derivative is the sign of its numerator, but we could not determine that sign analytically. Using numerical simulations, we obtain that the derivative is negative for low values of a, and positive for high values. The sign reversal of the derivative is explained by the fact that, for a given number of cities, the size of the smallest city under the rank-size rule, \(n_{1} = n_{K} K^{ - 1/a}\), increases with a, leaving a larger gap to the left of the first point on the cumulative distribution (see Fig. 1). Referring to index computation formula (7), it is easily verified that its first term is equal to \(n_{1}\). Indeed, our numerical simulations confirm that, if that first term is omitted, our index is a monotonically decreasing function of parameter a. This is illustrated in Fig. 7 (Relationship of the proposed index to the Pareto elasticity parameter). References: Adelman, M. A. (1969). Comment on the "H" concentration measure as a numbers-equivalent. The Review of Economics and Statistics, 51(1), 99–101. Alfonso, A., & Venâncio, A. (2013). The relevance of commuting zones for regional spending efficiency. Working Paper 17/2013/DE/UECE/ADVANCE, Department of Economics, School of Economics and Management, Technical University of Lisbon. Retrieved 10 January 2014. http://pascal.iseg.utl.pt/~depeco/wp/wp172013.pdf Arriaga, E. E. (1970). A new approach to the measurements of urbanization. Economic Development and Cultural Change, 18(2), 206–218. Arriaga, E. E. (1975). Selected measures of urbanization. Chap. II in S. Goldstein & D. F. Sly (Eds.), The measurement of urbanization and projection of urban population, Working Paper 2, International Union for the Scientific Study of Population, Committee on Urbanization and Population Redistribution. Ordina Editions, Dolhain, Belgium. Boix, R., & Galleto, V. (2006). Identificación de Sistemas Locales de Trabajo y Distritos Industriales en España. Dirección General de Política de la Pequeña y Mediana Empresa, Ministerio de Industria, Comercio y Turismo. Bond, S., & Coombes, M. (2007). 2001-based Travel-To-Work Areas Methodology. Office for National Statistics. Retrieved 13 January 2013. http://www.ons.gov.uk Brülhart, M., & Sbergami, F. (2009). Agglomeration and growth: Cross-country evidence.
Journal of Urban Economics, 65, 48–63. Brülhart, M., & Traeger, R. (2005). An account of geographic concentration patterns in Europe. Regional Science and Urban Economics, 35, 597–624. Capello, R., & Camagni, R. (2000). Beyond optimal city size: An evaluation of alternative growth patterns. Urban Studies, 37(9), 1479–1496. Casado-Díaz, J. M. (2000). Local labour market areas in Spain: A case study. Regional Studies, 34, 843–856. Cowell, F. A. (2009). Measuring inequality, LSE perspectives on economic analysis. Oxford: Oxford University Press. Crédit Suisse Research Institute. (2012). Opportunities in an urbanizing world. Zurich, Switzerland. Retrieved 16 January 2014. https://www.credit-suisse.com/ch/fr/news-and-expertise/research/credit-suisse-research-institute/publications.html Dalton, H. (1920). The measurement of the inequality of incomes. The Economic Journal, 30(119), 348–361. Daniels, P. (1985). Service industries: A geographical perspective. New York: Methuen. DATAR-DARES-INSEE (2011). Atlas des zones d'emploi 2010. Délégation interministérielle à l'Aménagement du Territoire et à l'Attractivité régionale (DATAR), Direction de l'Animation de la Recherche, des Études et des Statistiques (DARES) and Institut National de la Statistique et des Études Économiques (INSEE). Retrieved 13 January 2014. http://www.insee.fr/fr/themes/detail.asp?reg_id=0&ref_id=atlas-zone-emploi-2010 Excerpts from this document are downloadable from F. Sforzi's homepage. Retrieved 23 January 2014. http://economia.unipr.it/DOCENTI/SFORZI/docs/files/SISTEMI_LOCALI.PDF Fernández, E., & Rubiera, F. (2012). Defining the spatial scale in modern economic analysis: New challenges from data at local level. Advances in spatial science series. Berlin: Springer. Glaeser, E. L., Kallal, H. D., Scheinkman, J.
A., & Schleifer, A. (1992). Growth in cities. Journal of Political Economy, 100(6), 1126–1152. Goodyear, R. (2008). Workforces on the move: An examination of commuting patterns to the cities of Auckland, Wellington and Christchurch. Paper presented at the NZAE conference, Wellington City, New Zealand, July 2008. Statistics New Zealand. Retrieved 10 January 2014. http://www.stats.govt.nz/methods/research-papers/nzae/nzae-2008/workforces-on-the-move.aspx Henderson, J. V. (1988). Urban development: Theory, fact and illusion. Oxford: Oxford University Press. Henderson, J. V. (2003a). Urbanization and economic development. Annals of Economics and Finance, 4, 275–341. Henderson, J. V. (2003b). The urbanization process and economic growth: The so-what question. Journal of Economic Growth, 8, 47–71. Hoover, E. M. (1937). Location theory and the shoe and leather industry. Cambridge, MA: Harvard University Press. Illeris, S. (1996). The service economy: A geographical approach. Chichester, U.K.: Wiley. INE. (2001). Censo de Población, 2001. Instituto Nacional de Estadística. http://www.ine.es Isard, W. (1956). Location and space-economy. Cambridge, MA: The Technology Press of the Massachusetts Institute of Technology. ISTAT. (1997). I Sistemi Locali del Lavoro 1991. A cura di F. Sforzi, Collana Argomenti 10, Istituto Nazionale di Statistica, Roma. ISTAT. (2005). I Sistemi Locali del Lavoro. Censimento 2001. Dati definitivi, Istituto Nazionale di Statistica. Retrieved 10 January 2014. http://dawinci.istat.it/daWinci/jsp/MD/download/sll_comunicato.pdf ISTAT. (2006). 8o Censimento generale dell'industria e dei servizi. Distretti industriali e sistemi locali del lavoro 2001. Collana Censimenti, Istituto Nazionale di Statistica. Retrieved 10 January 2014.
http://www.istat.it/it/files/2011/01/Volume_Distretti1.pdf Lemelin, A., Rubiera-Morollón, F., & Gómez-Loscos, A. (2012). A territorial index of potential agglomeration economies from urbanization. Montréal, INRS-UCS, coll. Inédits, 2012-03. http://www.ucs.inrs.ca/sites/default/files/centre_ucs/pdf/Inedit03-12.pdf Munro, A., Alasia, A., & Bollman, R. D. (2011). Self-contained labour areas: A proposed delineation and classification by degree of rurality. Rural and Small Town Canada Analysis Bulletin, 8(8), Catalogue no. 20-006-X, Statistics Canada. Retrieved 16 January 2014. http://www.statcan.gc.ca/pub/21-006-x/21-006-x2008008-eng.htm Ohlin, B. (1933). Interregional and international trade. Cambridge, MA: Harvard University Press. ONS. (2007). Introduction to the 2001-based Travel-to-Work Areas. ONS. Retrieved 13 January 2013. http://www.ons.gov.uk Papps, K. L., & Newell, J. O. (2002). Identifying functional labour market areas in New Zealand: A reconnaissance study using travel-to-work data. Discussion Paper No. 443, IZA (Institute for the Study of Labor), Bonn, Germany. Retrieved 10 January 2014. http://ftp.iza.org/dp443.pdf Pigou, A. C. (1912). Wealth and Welfare. MacMillan, London. Retrieved 16 January 2014. https://archive.org/details/cu31924032613386 Polèse, M., Shearmur, R., & Rubiera, F. (2007). Observing regularities in location patterns: An analysis of the spatial distribution of economic activity in Spain. European Urban and Regional Studies, 14(2), 157–180. Rosen, K., & Resnick, M. (1980). The size distribution of cities: An examination of the Pareto law and primacy. Journal of Urban Economics, 8, 165–186. Rubiera, F., & Viñuela, A. (2012). From functional areas to analytical regions, where the agglomeration economies make sense. Chap. 2, pp. 23–44 in E. Fernández & F. Rubiera (Eds.),
Defining the spatial scale in modern economic analysis: New challenges from data at local level. Berlin: Springer. Sforzi, F. (2012). From administrative spatial units to local labour market areas. Chap. 1, pp. 3–21 in E. Fernández & F. Rubiera (Eds.), Defining the spatial scale in modern economic analysis: New challenges from data at local level. Berlin: Springer. Shearmur, R., & Doloreux, D. (2008). Urban hierarchy or local milieu? High-order producer service and (or) knowledge-intensive business service location in Canada, 1991–2001. Professional Geographer, 60(3), 333–355. Spence, M., Annez, P. C., & Buckley, R. M. (2009). Urbanization and growth. Washington, D.C.: Commission on Growth and Development, The World Bank. Statistics New Zealand. (2009). Workforces on the move: Commuting patterns in New Zealand. Statistics New Zealand. Retrieved 14 January 2014. http://www.stats.govt.nz/browse_for_stats/people_and_communities/Geographic-areas/commuting-patterns-in-nz-1996-2006.aspx Tolbert, C. M., & Sizer, M. (1996). U.S. commuting zones and local market areas: A 1990 update. ERS Staff Paper, Rural Economy Division, Economic Research Service, U.S. Department of Agriculture. Retrieved 10 January 2014. https://usa.ipums.org/usa/resources/volii/cmz90.pdf Uchida, H., & Nelson, A. (2010). Agglomeration index: Towards a new measure of urban concentration. In J. Beall, B. Guha-Khasnobis, & R. Kanbur (Eds.), Urbanization and development. Oxford: Oxford University Press. USDA ERS. (2012). Commuting zones and labor market areas: Documentation (web document). Economic Research Department, U.S. Department of Agriculture. Retrieved 14 January 2014. http://www.ers.usda.gov/data-products/commuting-zones-and-labor-market-areas/documentation Weber, A. (1909). Über den Standort der Industrien. Mohr, Tübingen; translated by Friedrich, C. J.
(1929) as Alfred Weber's Theory of the Location of Industries. Chicago, IL: University of Chicago Press. Wernerheim, M., & Sharpe, C. (2003). High-order producer services in metropolitan Canada: How footloose are they? Regional Studies, 37, 469–490. Wheaton, W., & Shishido, H. (1981). Urban concentration, agglomeration economies, and the level of economic development. Economic Development and Cultural Change, 30, 17–30. Zhu, N., Luo, X., & Zou, H. (2012). Regional differences in China's urbanization and its determinants. CEMA Working Papers 535, China Economics and Management Academy, Central University of Finance and Economics. Retrieved 21 January 2014. http://ideas.repec.org/p/cuf/wpaper/535.html © Springer Science+Business Media Dordrecht 2014. Affiliations: 1. INRS - Institut National de la Recherche Scientifique, Québec, Canada; 2. REGIOlab, University of Oviedo, Oviedo, Spain; 3. Banco de España, Madrid, Spain. Lemelin, A., Rubiera-Morollón, F., & Gómez-Loscos, A. Soc Indic Res (2016) 125: 589. https://doi.org/10.1007/s11205-014-0846-9 Publisher Name: Springer Netherlands
CommonCrawl
Snowmobiles: REPACK Elden Ring: Deluxe Edition Crack Full Version [+ DLC] Free Registration Code Free For PC. Published by: pauhela. Near: Unlisted. Last Updated: July 15, 2022 at about 2pm. The renowned fantasy action RPG title, Elden Ring Crack Mac, is coming to PlayStation Vita! An action RPG that tells a rich and long-lasting story, Elden Ring Crack Mac takes everything we have to offer in the RPG genre: intense battles, beautifully crafted dungeons and a huge story that unfolds over the course of years. As you embark on a new adventure, make your way as you choose, encounter new challenges and develop your character. ABOUT ELDEN RING FOR PLAYSTATION VITA Established as the console version of the award-winning action RPG, the console version of ELDEN RING is upscaled to meet the standards of PlayStation Vita. Enjoy the rich story, an exciting battle system, and a smooth online connection with others through the revolution of new technology. ABOUT ELEXION PRODUCTS Based in Tokyo, Japan, Elexion Products is developing new next-gen action RPGs for Nintendo Switch and PlayStation Vita. ABOUT ELEXION ENTERTAINMENT CO., LTD. Elexion Entertainment Co., Ltd. is a mobile game developer based in Tokyo, Japan. Founded in February 2016, the company is aiming to create future platforms for consoles, smartphones and other mobile devices. Their games include Sword Art Online: Fatal Bullet, Mobile Suit Gundam: Battle Operation 2, Senran Kagura Splatter Party and SaGa the Origin Miria. You can follow us on Twitter and Facebook for the latest news on Elexion Entertainment's games. © Elexion Entertainment Co. Ltd. 2016, 2017-2018. Location Address: 607 Battle Creek Highway, Ypsilanti, Michigan 48198. GPS: 42.129854, -84.379855. Local Directions: From I-94, exit 164 at Goetz Road. Go south on Goetz Road, past the first stop sign, past the entrance to Mecosta County Fairgrounds on the right.
Proceed through the lights, and continue straight on Goetz Road, which is the first street on the right after crossing the bridge on the road coming from Highland. Go 0.7 mile to the stop sign. Turn right on Battle Creek Highway. Drive 2.1 miles to Wyman Road, turn right onto Wyman Road and go one mile to the community of Ypsilanti Township. Elden Ring Features Key: Unique one-on-one online battles with a continuous, diverse story. The story is played out through continuous online battles that allow you to fight 1 vs. 1 with another NPC character. You are given unique heroes belonging to different races (Elden and Kobold) who have different abilities, fighting styles, and skills. Although various techniques can be combined, a fight against other NPC characters is challenging and entertaining. Unique 'After Dungeon Battle' feature: a card game designed after the battles against different enemies. The game gives you diverse content depending on the types of enemies you defeat. Climbing combat: the combat system supports the tactician style, where your strategy should focus on the positioning and attacking of the surrounding walls as you move forward through the dungeon. Deja-vu effect of 'Play' mode: when you join a dungeon, you are automatically transported to the location where you fought 'Play' battles. Revolutionary battle settings: new content that offers a variety of battle settings with additional items and different characteristics of the rune card is added. Step-by-step battle method: you can step through multiple turn options to activate skills while your character is retreating or as a result of an unforeseen situation that forces you to change the sequence of the weapon attacks. NEW ARTWORK: Steel all with grace. Roam the winds of destiny… Rising from the sands. Into the Endless Night.
Elden Ring Crack + License Keygen Free X64 Submit a form on client side I have a situation wherein i need to submit a form on the client side. The server may be down or disconnected. The application wont be using ajax or any other request methods. I want to submit a form on the server side and then i want to redirect the user to some other page. How can i go about this. Please help.. I think that you have to take care of 2 problems: How to submit that form without server! How to redirect the user to some other page? Javascript or Ajax: Submitting a form with Javascript is pretty straightforward. If you're using a framework like jQuery, you can perform requests with AJAX and callbacks on success. If you want to present the user with a nice, flicker-free user interface, there are a bunch of ways to perform a redirect in javascript. If you really can't use an ajax request, then you're back to the javascript trick. By using request.getParameter("nameOfYourParameter"), you can get the value of the form element with the name of your parameter. (BTW, params are available in all JSP's and Servlets too, except in Elden Ring Crack + Free X64 Table Top Roulette. This is the most popular and general style of roulette you will find. The wheel is typically much larger than the others and it's somewhat a mix of European and American in style. There are two additional and separate single zero columns, each with 12 zeros and 12 noughts. You can lay bets on red or black, on the zero and on the numbers, just as with a full-size game. The table is relatively bigger than the others; the wheel is slightly bigger too. It's quite a mixture between the other two types, European and American. There is a standard layout of numbers from 1 to 36 along with 0 and 00. You can wager on red or black, and of course, on all six numbers. All in all, it's a great option for players who want to spin the wheel but don't enjoy betting on red or black. 
This is the smallest wheel type, and just like miniature versions, it has just six numbers. Each of the other five wheels has a single zero column and 11 noughts in the other columns. A standard layout of numbers from 0 to 36 is available along with 00 and 0. You can wager on red or black, and on three numbers. This is the most straightforward and honest of the table types. Mini Roulette Rules Mini roulette is one of the most popular table games on the Internet. It's great for mobile and tablet casinos due to its smaller, more accessible wheel and table. Our staff has tried out numerous games for players of all skill levels, and has compiled a shortlist of the greatest websites to play mini roulette online. Before you wager, you need to make sure that you have a good idea of the rules and regulations of this game. Mini roulette is a game designed to be played on a smaller wheel with a smaller table, and so certain rules apply. Similar rules will apply to any of the variant types, including the one most like the original European and American roulette. The caveat is that there will be a single zero column which will have only 12 zeros and 12 noughts. You can wager on red or black, or even on two numbers. This is a perfect choice for beginners and new players. You can place bets on red or black, and so can lay bets on any three numbers. If you bet on Producer Nobuhiro Nakabayashi said, "The Lands Between is a world that represents the beauty of its own land, and has a great myriad of diverse landscapes, so that many kinds of online play systems can be enabled. It is especially excellent as an online game that allows a deep and rich combination of the fully-fledged online system for people to feel as if they are walking around together with each other." Please note that the Summon Stone will not be included with the purchase of the game. In order to summon a White Heron, you must first equip the "Hawk Talisman" and set the skill name to "white heron." 
By playing The Lands Between: Summoning, you warrant that: – You will not do anything to deprive any third party of a right it would enjoy in a similar situation if the same circumstances arose. – You will not use any kind of method, not even a performance enhancing drug, to obtain an unfair advantage in this game or other performance enhancing methods at any time in the duration of this game (i.e. exam stress, games addiction, etc.). – You will not use any kind of method to gain an unfair advantage in any other performance enhancing methods at any time in the duration of this game (i.e. exam stress, games addiction, etc.). Please, bare in mind, after you take a purchase order for this product, the publisher and its representative shall have the right to check all the information stated above, that they provide customers. This is not a sales confirmation. If any of the above happens, please contact the publisher as soon as possible. Please note that any dispute arising out of this contract shall be heard and resolved by the Tokyo office of Square Enix Co., Ltd. With regard to the e-mail address you provide, we will strive to not inquire into or use it for any other purpose than notification in relation to this purchase, and agree not to sell or give it to third parties. We hereby declare that we will never inquire into or use, nor will we pass on to any third party, any information that could be used for marketing or advertisement. The Lands Between: Summoning requires both the physical sales receipt as well as a copy of the physical game for the sale of the game to be accepted by the publisher/distributor. The publisher reserves Free Download Elden Ring Crack For PC 1. Unpack the game. 2. Play setup from the Unpack folder. 3. Read the INF file to get a list of URLs where you can get additional game data. 4. Run the game from the Unpack folder. 
ELDEN RING For Windows 7 – Windows XP Elden Ring is a game that showcases nostalgia for the Elder Scrolls series with great production value and graphics. In this game you play a man driven by his ambitions. He must rise up from the lower class that he belongs to by fighting in the wars against the Dark Brotherhood. – First-person perspective. – Unique combat system. – Over 100 weapons to use in battle. – System of upgrades and upgrading your equipment. – Great graphics and easy controls. – Simple and fun game design. – Feel the other's presence. – Dynamic game world. – Different special skills and abilities. – A unique dramatic story. – Lots of quests to take care of. – Famous NPCs. – 100-hour long game. Release date: 5th of February 2011. Developer: Ulterium Software GmbH. Publisher: Ulterium Software GmbH. Genre: Action RPG. Languages: English and German. Ygvaren Games : Ygvaren Games Genre : RPG Les Cahiers de Rancage S04 : Rerun serie des studios Les Cahiers de Rancage. La vie est a vitesse sur le parking de la RER C, et C'est cela que l'on nomme "Rerun". Silencing the Skalir : Farbgeld : Grunzen um Ringe : Reign in Gode : Rerun 2 : Rerun 3 : Finale : Rerun serie des studios Les Cahiers de Rancage. La vie est a How To Crack Elden Ring: Unrar RAR. Extract to the game directory (usually C:\Program Files\R-edition\Bibliotheca) Run Setup.exe as administrator. Pyridoxal phosphate as hypoxanthine-guanine phosphoribosyltransferase inhibitor. Although the inhibition of guanine phosphoribosyltransferase (EC 2.4.2.8) by pyridoxal phosphate has been established, the competitive characteristics of inhibition have never been reported. The aim of the present investigation was to study the structure-activity relationships of pyrophosphate derivatives of pyridoxal. The strongest inhibitor investigated is the asymmetric compound 8, which has Ki 8.9 X 10(-9) M. 
Pyridoxal derivatives differing in C-2 substituents have a different effect on enzyme inhibition: the compound 1, which has a 2-formyl substituent, is almost inactive (Ki 1.8 X 10(-6) M), whereas the compounds 2, 4 and 5, which have a 2-hydroxyl, 2-methylene and 2-O-hydroxyl substituent respectively, are almost as potent as pyridoxal phosphate (KI 7.6 X 10(-9) M). The effects of these compounds on the enzymic activity are irreversible.Q: Rewriting TimeSpan I have an object called TimeSpan. In many pages I have a code like Request.Timeout = TimeSpan.FromMinutes(2.5); I have a big class that I use all the time and in it there is a method that has a big code that has the previous line in it. public TimeSpan timeout { get; set; } and I want to change the code to something like this: Request.Timeout = (2.5).Timeout; How can I do it? Do I have to change it in the whole class or there is a design pattern that allows this? Use Named property. Declare property like following to read the value from a config file. 
    /// The Timeout
    public TimeSpan Timeout
CommonCrawl
Org: Abba Gumel (Manitoba)

AHMED ABDELRAZEC, York University
Spread and control of dengue with limited public health resources [PDF]

A deterministic model for the transmission dynamics of dengue, with a nonlinear recovery rate reflecting the public health resources, is formulated to study the impact of the available resources of the health system on the spread and control of dengue fever. Model results indicate the existence of multiple endemic equilibria; one of them can be driven to change stability, and a Hopf bifurcation occurs as parameters vary, in particular the one representing the public health resources. Additionally, our model exhibits the phenomenon of backward bifurcation, a common feature of vector-borne diseases. Our model and results can help public health authorities plan the resources essential for the control of dengue.

CAMERON BROWNE, Vanderbilt University
A Nosocomial Epidemic Model with Room Contamination [PDF]

Nosocomial infections, i.e. hospital-acquired infections, are a major public health concern, especially in light of the spread of antibiotic-resistant bacteria. In this talk, I present a model of epidemic bacterial infections in hospitals which incorporates the infection of patients and contamination of healthcare workers due to environmental causes. The basic reproduction number, $\mathcal R_0$, is defined and the asymptotic dynamics are analyzed. Under certain conditions, it is proved that the disease-free equilibrium is globally stable when $\mathcal R_0<1$. However, in general the disease-free equilibrium is only locally stable when $\mathcal R_0<1$, and there can be multiple positive steady states in this case. Numerical simulations are conducted and the model is interpreted to provide insight for controlling nosocomial epidemics. Furthermore, the problem of antibiotic resistance, along with potential intervention strategies, is discussed.
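Compartmental epidemic models like those above hinge on the basic reproduction number $\mathcal R_0$, which is standardly computed as the spectral radius of the next-generation matrix $FV^{-1}$ linearized at the disease-free equilibrium. A minimal sketch on a toy exposed-infectious (SEI-type) structure; the compartments and parameter values are illustrative, not taken from either talk:

```python
import numpy as np

def r0_next_generation(F, V):
    """Basic reproduction number as the spectral radius of F @ inv(V),
    where F holds new-infection rates and V holds transition rates
    linearized at the disease-free equilibrium."""
    K = F @ np.linalg.inv(V)          # next-generation matrix
    return max(abs(np.linalg.eigvals(K)))

# Toy E-I structure (illustrative parameters, not from the talks):
beta, sigma, gamma = 0.4, 0.2, 0.1    # transmission, progression, recovery
F = np.array([[0.0, beta],            # new infections enter E via contacts with I
              [0.0, 0.0]])
V = np.array([[sigma, 0.0],           # E leaves by progression
              [-sigma, gamma]])       # I gains from E, leaves by recovery
r0 = r0_next_generation(F, V)         # equals beta/gamma for this structure
```

For this particular structure the spectral radius reduces to $\beta/\gamma$, matching the intuition that every exposed individual eventually becomes infectious and transmits for an average time $1/\gamma$.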
GERDA DE VRIES, University of Alberta
Formation of Animal Groups: The Importance of Communication [PDF]

We investigate the formation and movement of self-organizing collectives of individuals in homogeneous environments. We review a hyperbolic system of conservation laws based on the assumption that the interactions governing movement depend not only on the distance between individuals, but also on whether neighbours move towards or away from the reference individual. The inclusion of direction-dependent communication mechanisms significantly enriches the model behavior; the model exhibits classical patterns such as stationary pulses and traveling trains, but also novel patterns such as zigzag pulses, breathers, and feathers. The same enrichment of model behavior is observed when we include direction-dependent communication mechanisms in individual-based models.

THOMAS HILLEN, University of Alberta
Mathematical Modelling with Fully Anisotropic Diffusion [PDF]

Anisotropic diffusion describes a random walk with different diffusion rates in different directions. I will present a form of anisotropic diffusion which is called "fully" anisotropic. The fully anisotropic diffusion model does not obey a maximum principle and can even lead to singularity formation in infinite time. I will derive this model from biological principles, analyse some of its behavior, and show how it can be used to model glioma spread and wolf movement.

ALI JAVAME, University of Manitoba
Role of Pap Screening on HPV Transmission Dynamics [PDF]

Human papillomavirus (HPV), a major sexually-transmitted disease, is known to be the causative agent of cervical cancer (in addition to causing many other cancers in both males and females). Each year, 500,000 women develop cervical cancer (and about 50% of these women succumb to the cancer).
The talk is based on the design and rigorous qualitative analysis of a new deterministic model for the transmission dynamics of HPV (and related cancers) in a community, in the presence of Pap cytology screening.

JUNLING MA, University of Victoria
Modeling SIS disease dynamics on random contact networks [PDF]

Contact networks represent persons by nodes and contacts by edges. This is a more realistic model of disease-related human contacts than the random mixing model, which assumes that every pair of individuals has an identical contact rate. An effective degree SIS epidemic model was developed previously, and was shown to have a different disease threshold than an SIR model. This contradicts the prediction of classic disease models that SIR and SIS models should have the same disease threshold. However, this effective degree model is too complex to derive a closed formula for the disease threshold. In this talk, I will introduce a simplified SIS model on random contact networks, which agrees with stochastic simulations and is mathematically tractable. The model yields a disease threshold formula that bears a clear biological meaning: for the disease to spread, the average number of transmissible neighbours times the average number of times a neighbour can be infected must be greater than unity. The threshold converges to that of the SIR model in the homogeneous mixing limit.

DESSALEGN MELESSE, University of Manitoba
Understanding heterogeneity in HIV transmission dynamics among high risk populations: a mathematical modeling approach [PDF]

A concentrated HIV epidemic is characterized by the transmission of HIV largely within defined vulnerable populations, namely high risk groups: injection drug users (IDUs), sex workers (female, male and transgender), and their sexual partners. As a result, targeted intervention strategies among high risk populations are seen as high public health priorities in many settings.
However, heterogeneity in the mixing patterns among these high risk populations has been shown to sustain the epidemic and complicate the delivery of intervention strategies. This talk will focus on understanding heterogeneity in the dynamical interplay between high risk populations using mathematical modeling. This research is in progress.

FERESHTEH NAZARI, Manitoba

ZHIPENG QIU, Nanjing University of Science and Technology / York University
Complex dynamics of a nutrient-plankton system with nonlinear phytoplankton mortality and allelopathy [PDF]

Understanding plankton dynamics can potentially help us take effective measures to settle the critical issue of how to keep the plankton ecosystem in balance. In this paper, a nutrient-phytoplankton-zooplankton (NPZ) model is formulated to gain insight into the mechanism of plankton dynamics. To account for the harmful effect of phytoplankton allelopathy, a prototype for a non-monotone response function is used to model zooplankton grazing, and nonlinear phytoplankton mortality is also included in the NPZ model. The main purpose of the paper is to analyze the complex dynamics of the NPZ model, with a particular focus on understanding how phytoplankton allelopathy and nonlinear phytoplankton mortality affect the plankton population dynamics. We first examine the existence of multiple equilibria and provide a detailed classification of the equilibria of the NPZ system; stability and local bifurcations are then studied. Sufficient conditions for Hopf bifurcation, Bogdanov-Takens bifurcation and zero-Hopf bifurcation are given, respectively. Numerical simulations are finally conducted to confirm and extend the analytic results. The theoretical and numerical findings imply that phytoplankton allelopathy and nonlinear phytoplankton mortality may lead to a rich variety of complex dynamics of the nutrient-plankton system.
The results of this study suggest that the effects of phytoplankton allelopathy and nonlinear phytoplankton mortality should receive additional consideration in understanding the mechanism of plankton dynamics.

MICHAEL YODZIS, University of Guelph
Dynamics of Pollution-induced Illnesses in Fishing Communities, with Social Feedbacks [PDF]

Pollution-induced illnesses are caused by toxicants that result from human activity and should be entirely preventable. However, social pressures and misperceptions can undermine the efforts to limit pollution, and vulnerable populations can remain exposed for decades. This talk presents a human-environmental system model for the effects of water pollution on the health and livelihood of a fishing community in the developing world. It incorporates dynamic social feedbacks that determine how effectively the population recognizes the injured, and acts to reduce the pollution exposure. The model, which is motivated by an incident from 1949-1968 in Minamata, Japan (where methylmercury effluent from a local factory poisoned fish populations and the humans who ate them), will be rigorously analysed to gain insight into its dynamical features. In particular, conditions that allow for the outbreak of a pollution-induced epidemic will be derived. This research is joint work with Dr. Chris Bauch.

SANLING YUAN, University of Shanghai for Science and Technology; University of Victoria
Dynamics of a stochastic model for algal bloom with and without distributed delay [PDF]

In this talk, two stochastic models for algal bloom, with and without distributed delay, are investigated. We introduce white noise into the equation for the algae population to describe the effects of environmental random fluctuations, and a delay into the nutrient equation to account for the time needed for the conversion of detritus into nutrient. The existence and uniqueness of global positive solutions for both models are proved.
By constructing Lyapunov function(al)s, sufficient conditions for the stochastic stability of the washout equilibrium are obtained for both models. Furthermore, for the model without delay, we give an estimate of the deviation of the solutions of the stochastic model from the positive equilibrium of its corresponding deterministic model; for the delayed model, our theoretical results show that it has the same long-term behavior as the one without delay, which means that the delay does not affect the long-term behavior of the system, though numerical simulations reveal that it may reduce the level of the algae population initially.

HUAIPING ZHU, York
Modeling and forecasting of West Nile virus [PDF]

West Nile virus is a mosquito-borne flavivirus typically transmitted between birds and mosquitoes, which can infect humans and other domestic mammals. It has become a threat to public health since 1999 in North America. Like other mosquito-borne or vector-borne diseases, the transmission and dynamics of West Nile virus can be very complicated due to the climate and environmental impact on vector mosquito density, and the seasonal impact on amplification host birds and biting incidences of the vectors. In this talk, I will discuss the modeling and dynamics of the virus, including bifurcation analysis of some compartmental models. I will briefly introduce our effort on using surveillance data, weather and landscape data to model and produce weekly real-time forecasts of Culex mosquito abundance, the minimum infection rate (MIR) for risk assessment, and human infection with West Nile virus in Ontario; an effort towards toolkit development for public health and the establishment of an Early Warning and Response System (EWARS) for vector-borne diseases in Ontario, Canada.
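Stochastic models of the kind in Yuan's abstract, with white noise entering a population equation, are usually simulated with the Euler-Maruyama scheme. A minimal sketch on a toy scalar SDE $dx = x(1-x)\,dt + \sigma x\,dW$, a stand-in chosen for brevity rather than the authors' NPZ system:

```python
import numpy as np

def euler_maruyama(x0, drift, diffusion, dt, n_steps, rng):
    """Simulate dx = drift(x) dt + diffusion(x) dW with the Euler-Maruyama scheme."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))      # Brownian increment ~ N(0, dt)
        x[i + 1] = x[i] + drift(x[i]) * dt + diffusion(x[i]) * dW
    return x

rng = np.random.default_rng(0)
sigma = 0.1                                     # noise intensity (illustrative)
path = euler_maruyama(0.2, lambda x: x * (1 - x), lambda x: sigma * x,
                      dt=0.01, n_steps=2000, rng=rng)
```

A convenient check is that with the noise switched off ($\sigma = 0$) the scheme reduces to deterministic Euler for the logistic equation and the path settles at the carrying capacity.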
July 2017, 16(4): 1455-1470. doi: 10.3934/cpaa.2017069

Damping to prevent the blow-up of the Korteweg-de Vries equation

Pierre Garnier
Laboratoire Amiénois de Mathématique Fondamentale et Appliquée, CNRS UMR 7352, Université de Picardie Jules Verne, 80039 Amiens, France

Received April 2016; Revised February 2017; Published April 2017

We study the behavior of the solution of a generalized damped KdV equation $u_t + u_x + u_{xxx} + u^p u_x + \mathscr{L}_{\gamma}(u) = 0$. We first state results on the local well-posedness. Then, when $p \geq 4$, conditions on $\mathscr{L}_{\gamma}$ are given to prevent the blow-up of the solution. Finally, we numerically build such sequences of damping.

Keywords: KdV equation, dispersion, dissipation, blow-up.
Mathematics Subject Classification: 35B44, 35Q53, 76B03, 76B15.
Citation: Pierre Garnier. Damping to prevent the blow-up of the Korteweg-de Vries equation. Communications on Pure & Applied Analysis, 2017, 16 (4): 1455-1470. doi: 10.3934/cpaa.2017069

Figure 1. Initialization
Figure 2. Dichotomy
Figure 4. Find the damping
Figure 5. At left, solution at different times $t=$ 0, 2, 4, 4.9925 and 5.3303. At right, $H^1$-norm and $L^2$-norm evolution without damping and a perturbed soliton as initial datum. Here $p=5$
Figure 6. At left, solution at different times $t=$ 0, 2, 5, 10, 11 and 11.3253. At right, $H^1$-norm and $L^2$-norm evolution with $\gamma_k=0.0025$ and a perturbed soliton as initial datum. Here $p=5$
Figure 7. At left, solution at different times $t=$ 0, 2, 5, 10, 15 and 20. At right, $H^1$-norm and $L^2$-norm evolution with $\gamma_k=0.0027$ and a perturbed soliton as initial datum. Here $p=5$
Figure 8. Example of a built damping. Here the initial datum is the perturbed soliton. Here $p=5$
Figure 9. At left, solution at different times $t=$ 0, 2, 5, 10, 15 and 20. At right, $H^1$-norm and $L^2$-norm evolution with $\gamma = \gamma_1$ and a perturbed soliton as initial datum. Here $p=5$
Figure 10. At left, solution at different times $t=$ 0, 2, 5, 7 and 7.928. At right, $H^1$-norm and $L^2$-norm evolution with $\gamma = \gamma_2$ and a perturbed soliton as initial datum. Here $p=5$
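The damped gKdV equation studied above, $u_t + u_x + u_{xxx} + u^p u_x + \mathscr{L}_{\gamma}(u) = 0$, can be explored numerically with a Fourier pseudo-spectral scheme: the linear part (advection, dispersion, damping) is solved exactly in Fourier space and the nonlinear term with an explicit substep. A minimal sketch, assuming the simplest damping $\mathscr{L}_{\gamma}(u) = \gamma u$ with constant $\gamma > 0$ (the paper builds more elaborate damping sequences):

```python
import numpy as np

def step_gkdv(u, dt, k, p, gamma):
    """One Strang-split step of u_t + u_x + u_xxx + u^p u_x + gamma*u = 0
    on a periodic domain, with angular wavenumbers k."""
    L = -1j * k + 1j * k**3 - gamma          # Fourier symbol of -(d_x + d_x^3 + gamma)
    half = np.exp(0.5 * dt * L)              # exact half-step for the linear part

    def nonlinear(v):                        # -u^p u_x = -d_x(u^{p+1} / (p+1))
        return -np.real(np.fft.ifft(1j * k * np.fft.fft(v**(p + 1) / (p + 1))))

    u = np.real(np.fft.ifft(half * np.fft.fft(u)))           # linear half-step
    u = u + dt * nonlinear(u + 0.5 * dt * nonlinear(u))      # midpoint rule
    u = np.real(np.fft.ifft(half * np.fft.fft(u)))           # linear half-step
    return u

n = 128
x = 2 * np.pi * np.arange(n) / n
k = 2 * np.pi * np.fft.fftfreq(n, d=2 * np.pi / n)           # integer wavenumbers
u = 0.1 * np.sin(x)                                          # small initial datum
for _ in range(1000):                                        # integrate to t = 1
    u = step_gkdv(u, 1e-3, k, p=5, gamma=0.5)
```

With this choice of damping, the transport, dispersion, and nonlinear terms all conserve the $L^2$ norm on a periodic domain, so $\|u(t)\|_{L^2} = e^{-\gamma t}\|u_0\|_{L^2}$ exactly; comparing the computed norm against this decay law is a convenient sanity check on the scheme.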
Population-wide analysis of differences in disease progression patterns in men and women

David Westergaard ORCID: orcid.org/0000-0003-0128-84321, Pope Moseley1, Freja Karuna Hemmingsen Sørup1,2, Pierre Baldi3 & Søren Brunak1

Nature Communications volume 10, Article number: 666 (2019)

Sex-stratified medicine is a fundamentally important, yet understudied, facet of modern medical care. A data-driven model for how to systematically analyze population-wide, longitudinal differences in hospital admissions between men and women is needed. Here, we demonstrate a systematic analysis of all diseases and disease co-occurrences in the complete Danish population using the ICD-10 and Global Burden of Disease terminologies. Incidence rates of single diagnoses are different for men and women in most cases. The age at first diagnosis is typically lower for men, compared to women. Men and women share many disease co-occurrences. However, many sex-associated incongruities not linked directly to anatomical or genomic differences are also found. Analysis of multi-step trajectories uncovers differences in longitudinal patterns, for example concerning injuries and substance abuse, cancer, and osteoporosis. The results point towards the need for an increased focus on sex-stratified medicine to elucidate the origins of the socio-economic and ethological differences.

Sex- and gender-stratified medicine is an essential aspect of precision medicine. Sex and gender affect the manifestation and pathophysiology of many diseases1,2,3. Sex is defined as the biological component, while gender is a social construct, as for example defined by the WHO4. Sex is a separate risk factor even when all other aspects have been taken into account5,6,7. Although sex is an important aspect of disease, many sex-specific analyses focus on one sex only rather than on the comparative aspect8.
Consequently, sex- and gender-medicine is generally understudied, and an increasing body of literature stresses the need to include both sexes in animal models, clinical trials, and healthcare planning policies8,9,10,11,12. Men and women are affected differently by diseases such as cardiovascular disease, osteoporosis, and autoimmune diseases2,3,13,14,15,16. Furthermore, many prior studies also indicate a bias in diagnosis and treatment, for example that osteoporosis is underdiagnosed in men, while chronic obstructive lung disease is underdiagnosed in women16,17. Although earlier studies point to clear sex-specific differences in a number of disease states, they have not yet been complemented by multimorbidity studies that incorporate the co-occurrence of other conditions in a systematic manner. Some co-occurring conditions display a consistent temporal progression trend. However, cross-sectional studies are time-unresolved, and most cohort studies define a priori the temporal association between conditions when testing a specific hypothesis, and thus do not take into account the order in which conditions are observed in clinical care. Nonetheless, the etiology and outcome of single conditions will very often be related to their temporal context in terms of other conditions18,19,20. A temporal trend is a prerequisite for causality and should systematically be taken into consideration when studying patient-specific co-occurrences of conditions21,22. Incidence and temporality in diagnosis co-occurrence have been studied previously, but the focus has not been centered on sex-stratified differences20,23. We now present a retrospective cohort study based on the population-wide Danish National Patient Registry (NPR), in which we examine sex-specific incidence, risk, and temporal aspects of diagnoses and co-occurrences of diagnoses related to disease and symptoms. Our findings indicate large discrepancies across all areas of disease.
Diagnosis incidence and relation to age

We analyzed hospital admissions from 6,909,676 patients (the whole Danish population during a 21-year period), of which 48.2% were women. We analyzed the incidence rate of 1369 ICD-10 level 3 diagnoses for men and women. A complementary analysis using the Global Burden of Disease (GBD) categories can be found in Supplementary Note 1. Incidence rates may be biased by age; thus we calculated the age-adjusted incidence rate (AIR) using the Eurostat 2013 standard population24. The Methods section contains a detailed account of the statistical model employed. We found that 344 and 473 diagnoses had a higher AIR in women and men, respectively (see Supplementary Data 1 for estimates and 95% Bayesian credible intervals (BCI)). Differences in incidence rates were not limited to a few particular disease areas, but were distributed across the 18 ICD-10 chapters studied (Fig. 1a). Nonetheless, some ICD-10 chapters such as infectious diseases (ch1), neoplasms (ch2), circulatory system diseases (ch9), respiratory diseases (ch10), perinatal conditions (ch16), and injuries (ch19) had a higher AIR in men, on average. Conversely, endocrine and metabolic disorders (ch4), eye and adnexa diseases (ch7), skin diseases (ch12), musculoskeletal diseases (ch13), and congenital malformations (ch17) had a higher AIR in women, on average. A very similar pattern was observed when using the GBD categories (Supplementary Figure 1A). Considering the age at first hospital diagnosis, we found 986 diagnoses in which the age differed between men and women (Welch's t test, FDR < 0.05) (see Supplementary Data 2 for mean values and 95% confidence intervals (CI)). We noticed that in the majority of cases, women were, on average, diagnosed at an older age than men (Fig. 1b, c). The only exceptions were neoplasms (ch2), blood and immune system diseases (ch3), and genitourinary system diseases (ch14).
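Direct age standardization, the usual route to an age-adjusted incidence rate like the AIR above, weights age-specific rates by a standard population's age shares. A toy sketch; the three age bands and weights below are illustrative placeholders, not the Eurostat 2013 values:

```python
def age_adjusted_rate(cases, person_years, std_weights):
    """Directly standardized incidence rate per 100,000 person-years.
    cases[i], person_years[i]: observed counts in age band i;
    std_weights[i]: standard-population share of age band i (sums to 1)."""
    assert abs(sum(std_weights) - 1.0) < 1e-9
    rates = [c / py for c, py in zip(cases, person_years)]   # age-specific rates
    return 1e5 * sum(w * r for w, r in zip(std_weights, rates))

# Illustrative data: three coarse age bands, e.g. 0-39, 40-69, 70+
cases = [10, 80, 120]
person_years = [500_000, 300_000, 100_000]
std_weights = [0.5, 0.35, 0.15]
air = age_adjusted_rate(cases, person_years, std_weights)
```

Because the same standard weights are applied to every group being compared, the resulting rates are comparable between populations (here, men and women) with different age structures.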
From the analysis using the GBD categories we also found that men were, in the majority of cases, diagnosed at a younger age compared to women (Supplementary Figure 1B, C).

Incidence and age at first hospital diagnosis of 1369 diagnoses. a 344 and 437 diagnoses were found to have a higher age-adjusted incidence rate in men and women, respectively. b Mean age at first diagnosis for each of the 1369 diagnoses studied. c Mean of the difference in age at first diagnosis. We found 963 diagnoses in which the age at first diagnosis was statistically significantly different when comparing men and women (Welch's t test, FDR < 0.05). Error bars are the standard error of the mean per ICD-10 chapter

Diagnosis co-occurrence

Following frequency-based filtering, we analyzed 27,185 diagnosis co-occurrences, including both sex- and non-sex-specific diagnoses (Fig. 2). In the analysis, we adjusted for a number of common confounding factors, including age, admission type, and hospitalization month and year, by selecting a matched comparison group. Modifying disease definitions and diagnostic criteria may affect both incidence and prevalence25. Previous studies found that changes in diagnostic criteria increased the hospitalization rate for, e.g., acute myocardial infarction (AMI), and increased the prevalence and shifted the age of diagnosis for autism26,27. The criteria for hospitalization year and month in our scheme negate this type of effect, as well as any seasonal influence, which may change the incidence of, for instance, infectious diseases. The Methods section contains a detailed account of the statistical model (see Supplementary Data 3 for estimates and 95% BCI of relative risks and directionality).

Diagnosis co-occurrences found in population-wide data from 6,909,676 patients.
951,509 ICD-10 level 3 diagnosis pairs were found to occur in the population; of these, a large number were filtered out due to low frequency (N < 100), dagger−asterisk combinations, or due to not passing the crude estimate of the relative risk. The standard method for calculating a confidence interval was applied in the prescreening step. Post-filtering, 27,185 diagnosis pairs remained, comprising 1360 unique diagnoses. Of these, 275 pairs involved a male-specific diagnosis and 1402 a female-specific diagnosis

We found 12,122 directional pairs (defined as diagnosis co-occurrences that had an elevated relative risk and a preferred statistical direction) when calculating the sex-adjusted RR. Remarkably, 4155 directional pairs (2055 in men and 2100 in women) were not common to both men and women. Hence, 4155 directional pairs are driven purely by one sex. This finding could be a result of a lack of power to detect the direction in either sex, but an analysis of the number of men and women diagnosed with the 12,122 pairs showed a high correlation (ρ = 0.861, 95% CI 0.857–0.863, Pearson correlation) (Supplementary Figure 2). For the 4155 directional pairs only, the correlation coefficient decreased slightly (ρ = 0.799, 95% CI 0.788−0.81, Pearson correlation). We performed a separate analysis of the excluded dagger−asterisk pairs and found that, overall, a dagger code (the etiology) precedes an asterisk code (the manifestation) (Supplementary Note 2). When taking sex into account, we found 9547 directional pairs in men and 10,380 directional pairs in women, respectively. Of these, 6885 were shared, leaving 2662 and 3495 unique pairs, respectively (reduced to 2514 and 2660 when not including sex-specific diagnoses). We examined the strength of directionality of the 6885 shared pairs (Supplementary Figure 3).
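A "directional pair" as defined above combines two ingredients: an elevated relative risk of the second diagnosis given the first, and a statistically preferred temporal order. The sketch below is a schematic of that general idea, assuming a plain risk ratio against a comparison group and a two-sided exact binomial test on the order of first occurrences; it is not the paper's full matched-sampling procedure, and all counts are made up:

```python
from math import comb

def relative_risk(exposed_cases, exposed_total, control_cases, control_total):
    """Risk of diagnosis D2 among patients with a prior D1 (exposed) versus a
    matched comparison group (controls)."""
    return (exposed_cases / exposed_total) / (control_cases / control_total)

def binomial_two_sided_p(k, n, p=0.5):
    """Two-sided exact binomial p-value for k successes in n trials
    (sum of all outcome probabilities no larger than that of k)."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    return min(1.0, sum(q for q in pmf if q <= pmf[k] * (1 + 1e-9)))

# Illustrative counts: among 200 patients with both diagnoses,
# 130 received D1 before D2 and 70 the reverse order.
rr = relative_risk(200, 1000, 80, 1000)       # elevated risk of D2 after D1
p_dir = binomial_two_sided_p(130, 200)        # is the D1 -> D2 order preferred?
directional = (rr > 1.0) and (p_dir < 0.05)
```

Under the null of no preferred order, each of the n patients with both diagnoses is equally likely to show either order, which is what the Binomial(n, 0.5) reference distribution encodes.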
We found that the variances of the two distributions were not equal, with a larger variance for women using both the ICD-10 (F = 0.8269, 95% CI 0.79−0.87, F test) and GBD (F = 0.52, 95% CI 0.42−0.66) terminologies. We noted that the distribution for women was skewed towards positive values, indicating a weaker trend in directionality compared to the sex-adjusted directionality overall. We also found that the majority of directional pairs included a nonchronic diagnosis, even when excluding the symptoms and injuries chapters (Supplementary Table 1). To obtain an overview of the anatomical and functional differences between men and women in terms of the directional pairs identified, we investigated their distribution over the 18 ICD-10 chapters. We only included directional pairs that were unique to one sex and did not include a sex-specific diagnosis (Fig. 3). We found that diagnosis pairs from perinatal conditions (ch16) and congenital malformations (ch17) were preferentially diagnosed first in both men and women, with the exception of "neoplasms (ch2) and congenital malformations" in women. Nonetheless, there were also incongruities, such as "genitourinary system diseases (ch14) and infections (ch1)", and "neoplasms (ch2) and digestive system diseases (ch11)". Using the GBD terminology, we noticed one case in which infectious diseases (ch1) were diagnosed prior to mental disorders (ch5) (Supplementary Figure 4); this was, in fact, opposite to what the analysis using the ICD-10 terminology indicated. In men, the ICD-10 terminology indicated that skin diseases (ch12) were diagnosed prior to infectious diseases (ch1), and the opposite was found using the GBD.

Temporal diagnosis co-occurrence across ICD-10 chapters. The distribution of 3186 and 3721 temporal diagnosis co-occurrences across ICD-10 chapters in men and women, respectively (non-sex-specific diagnoses).
The color scale indicates the percentage of the pairs that have the temporal directionality from the horizontal chapter to the vertical chapter. Numbers in the boxes indicate the breakdown of the overall co-occurrence figures.

Seven combinations of chapters were found to be unequally represented, of which five were overrepresented in women (FDR ≤ 0.05, Fisher's exact test) (Supplementary Table 2). Diagnoses related to "neoplasms (ch2) and digestive system diseases (ch11)", as well as diagnoses regarding injuries (ch19), were overrepresented in men. Diagnoses related to "infectious diseases (ch1) and musculoskeletal diseases (ch13)", "neoplasms (ch2) and circulatory system diseases (ch9)", "respiratory diseases (ch10) and signs and symptoms (ch18)", "musculoskeletal diseases (ch13) and signs and symptoms (ch18)", and "musculoskeletal diseases (ch13) and circulatory system diseases (ch9)" were overrepresented in women. Risk factors, in this case an earlier diagnosis, may predispose men and women to some diseases unequally. We found 939 pairs in which the relative risk of a future diagnosis was higher in one sex compared to the other. We only examined pairs in which more than five men or women had been diagnosed with the two diagnoses in the preferred statistical direction. In 517 cases, women were at a higher risk, while men were at a higher risk in 422 cases (Supplementary Figure 5A). We identified several inconsistencies, such as "mental disorders (ch5) and neoplasms (ch2)", in which the overall trend for the chapters was in the opposite order. When we examined the distribution of ICD-10 chapters to which the event, i.e.
the diagnosis following the exposure, belonged, we found nine chapters that were unevenly represented: endocrine and metabolic disorders (ch4), mental disorders (ch5), eye and adnexa diseases (ch7), digestive system diseases (ch11), skin diseases (ch12), and musculoskeletal diseases (ch13) were overrepresented in men, while ear and mastoid diseases (ch8), respiratory diseases (ch10), and genitourinary system diseases (ch14) were overrepresented in women (FDR ≤ 0.05, Fisher's exact test) (Supplementary Table 3, Supplementary Figure 6A). We compared 302 of these findings to earlier reports by searching for mentions of both ICD-10 terms in PubMed and Google Scholar. Full-text articles were inspected for evidence or mentions of sex-specific risk. In total, we found solid evidence for 42 co-occurrences for which a difference between men and women had been reported (Supplementary Dataset 9). Of these, 33 articles agreed with our findings, five provided only weak evidence, mentioning sex as a risk factor without a quantitative estimate or reference, and four reported opposite conclusions. These four articles were based on cohort sizes ranging from 83 to 74,020 individuals. We noticed that the directional pairs with the largest difference in relative risk from the GBD analysis were centered on substance abuse and retroviral diseases, and disorders of psychological development. Additionally, we found that men with chronic obstructive pulmonary disease (COPD) were at a higher risk of lower respiratory infections and other respiratory disorders (Supplementary Data 7). Inspecting the median time difference between the first occurrences, we found 1181 directional pairs in which the timespan differed between men and women (FDR ≤ 0.05, Mann−Whitney U test) (Supplementary Figure 5B). In 851 of these, the timespan between the two diagnoses was longer in women than in men.
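The timespan comparison combines a Mann−Whitney U test per directional pair with FDR control. A sketch on synthetic timespans; the distributions, sample sizes, and the Benjamini−Hochberg procedure (standing in for the paper's exact FDR implementation) are assumptions:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Synthetic timespans (days) between two diagnoses for three hypothetical
# directional pairs; in pair_1 and pair_3 women wait longer, pair_2 is null.
pairs = {
    "pair_1": (rng.exponential(300, 400), rng.exponential(500, 400)),
    "pair_2": (rng.exponential(300, 400), rng.exponential(300, 400)),
    "pair_3": (rng.exponential(200, 400), rng.exponential(350, 400)),
}

# Two-sided Mann-Whitney U test per pair (men vs women)
pvals = np.array([
    mannwhitneyu(men, women, alternative="two-sided").pvalue
    for men, women in pairs.values()
])

# Benjamini-Hochberg FDR at 0.05
m = len(pvals)
order = np.argsort(pvals)
passed = pvals[order] <= (np.arange(m) + 1) / m * 0.05
reject = np.zeros(m, bool)
if passed.any():
    reject[order[: passed.nonzero()[0].max() + 1]] = True
print(dict(zip(pairs, reject)))
```

The rank-based U test makes no normality assumption about the timespans, which is useful for the heavily right-skewed intervals typical of registry data.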
Here, three chapters were overrepresented: respiratory diseases (ch10) in women, and circulatory system diseases (ch9) and injuries (ch19) in men (Supplementary Table 4, Supplementary Figure 6B). At the extreme, the temporal relationship between exposure and event (e.g. diagnosis A and diagnosis B) may be reversed for men and women. Such a reversal could point to physiological or etiological differences, or may reflect diagnostic biases within the healthcare system. For example, our overall analysis indicated that ischemic heart disease (IHD, I25) precedes paroxysmal tachycardia (PT, I47). While this pattern holds for men, it is reversed in women: PT precedes IHD. Thus, men mediate the observed order of occurrence at the population-wide level (Fig. 4a). We identified 15 pairs using the ICD-10 terminology and one pair using the GBD terminology in which this reversal occurs, according to our criteria (Table 1). In ten cases, there was no preferred statistical direction at the population level, while the sex-specific preferred statistical directions were reversed. In the remaining five cases, the overall preferred statistical direction corresponded to the trend in men. In some cases, the pairs involved a chronic disease and a complication of this disease. Men were diagnosed with abscess of anal and rectal regions (K61) followed by Crohn's disease (K50) in 56 out of 100 cases, whereas women were diagnosed in that order in 44 out of 100 cases (Fig. 4b). Eight of the reversed pairs describe conditions related to the bladder and kidney. From the GBD analysis, we identified one relationship in which the order of diagnosis was reversed, namely "pancreatitis" and "gallbladder and biliary diseases": in men, pancreatitis was diagnosed prior to gallbladder and biliary diseases, whereas the reverse was found in women. The directionality observed at the population level corresponded to that in men.
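The notion of a reversed directional pair can be sketched with a simple two-sided binomial test as a frequentist stand-in for the Bayesian directionality model described in Methods. The counts below are illustrative, scaled up from the 56%/44% example:

```python
from scipy.stats import binomtest

# Among patients with both diagnoses, the number diagnosed in the order
# A -> B (hypothetical counts: 56% in men, 44% in women).
men_a_first, men_total = 560, 1000
women_a_first, women_total = 440, 1000

# Test each sex against a 50:50 split (no preferred direction)
p_men = binomtest(men_a_first, men_total, 0.5).pvalue
p_women = binomtest(women_a_first, women_total, 0.5).pvalue
print(f"men: p = {p_men:.2g}, women: p = {p_women:.2g}")

# A reversed pair: both sexes deviate from 50:50, in opposite directions.
reversed_pair = (p_men < 0.05 and p_women < 0.05
                 and (men_a_first / men_total - 0.5)
                 * (women_a_first / women_total - 0.5) < 0)
print("reversed directionality:", reversed_pair)
```

With these counts, both sexes show a significant preferred direction, and the directions oppose each other, so the pair would count as reversed.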
Opposite temporal relationships in men and women. (a) At the population level, paroxysmal tachycardia (I47) is observed to be a complication of ischemic heart disease (I25). The sex-stratified analysis showed that this pattern only existed in men, and that the reversed pattern was significant in women. (b) At the population level, there was no preferred direction of diagnoses between Crohn's disease (K50) and abscesses of anal and rectal regions (K61). However, the sex-stratified analysis found that the directionality was reversed between men and women.

Table 1 Reversed directional comorbidities (including 95% BCI)

Diagnosis trajectories

Piecing together individual directional pairs may point towards overlooked patterns and sex-related differences in a more extended temporal context. One framework for investigating this is diagnosis trajectories19. We investigated the ten directional pairs (defined as diagnosis co-occurrences with increased relative risk and a preferred statistical direction) with the largest difference in relative risk between men and women. We found 230 linear diagnosis trajectories containing at least four diagnoses (each followed by at least 100 patients) (Table 2). Clustering the individual trajectories into one trajectory network that displays each directional pair only once, we noticed large disparities in diagnoses related to cancers, injuries, and drug and alcohol abuse (Supplementary Fig. 7). Several diagnoses related to fractures and injuries lead to or from alcohol abuse-related codes, with a higher relative risk in women. Moreover, women have a higher relative risk of hepatic failure following esophageal varices (Fig. 5). In addition, cancers with well-known sex differences, such as thyroid cancer, bladder cancer, and breast cancer, are apparent (Fig. 6). The trajectory analysis was based on the most extreme directional pairs only; other diagnosis trajectories will result from including pairs with more moderate effect.
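Collapsing linear trajectories into a network in which each directional pair appears only once can be sketched as follows. The diagnosis codes and trajectories are invented for illustration and do not reproduce the paper's networks:

```python
from collections import Counter

# Linear trajectories as ordered tuples of diagnosis codes (illustrative).
trajectories = [
    ("E10", "I25", "I47", "I50"),
    ("I25", "I47", "I50", "N18"),
    ("E10", "I25", "I50", "N18"),
]

# Collapse into a trajectory network: each directed pair becomes one edge,
# annotated with the number of linear trajectories containing it.
edges = Counter(
    (a, b)
    for traj in trajectories
    for a, b in zip(traj, traj[1:])
)
for (a, b), n in sorted(edges.items()):
    print(f"{a} -> {b}: in {n} trajectories")
```

In the paper's networks, each such edge would additionally carry the sex in which the relative risk is elevated, which drives the edge coloring in Figs. 5−7.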
For instance, there is a well-known connection between obstructive lung disease and osteoporosis28. Using the disease trajectory framework, we investigated this disease progression pattern in men and women (Fig. 7). We observed that obstructive lung diseases prior to an osteoporosis diagnosis tended to occur only in men, with the exception of asthma and acute bronchitis. Moreover, osteoporosis without fracture followed by osteoporosis with fracture occurred only in women. Upon further inspection, we observed that this was due to the pair not having a preferred directionality in men, although the relative risk was still elevated.

Table 2 Directional pairs used to construct linear trajectories

Diagnosis trajectories involving injury or drug and alcohol abuse. A trajectory network combining 176 linear diagnosis trajectories related to alcohol and substance abuse (ten directional pairs with extreme differences in relative risk). Edges represent the connection between diagnoses with directional co-occurrence. The orange and green edges between nodes indicate co-occurrences in which the RR was elevated in women and men, respectively. The RR of injuries followed by alcoholic liver disease is increased in women. Furthermore, women have a higher RR of complications following esophageal varices, such as hepatic failure. RR, relative risk.

Diagnosis trajectories related to cancer. A trajectory network combining 62 linear diagnosis trajectories related to cancer (the ten directional pairs with extreme differences in relative risk). The trajectories illustrate disease routes related to cancers of the thyroid gland and urinary tract. The progression pattern includes secondary neoplasms, renal complications, and sepsis. Color scale as in Fig. 5.

Diagnosis trajectories related to obstructive lung disease and osteoporosis. A trajectory network combining 112 linear diagnosis trajectories including osteoporosis (M80, M81) and obstructive lung diseases (J40−J46).
The orange edges indicate co-occurrences only present in women, and the green edges indicate co-occurrences only present in men. The trajectories illustrate how obstructive lung diseases are found as a risk factor for osteoporosis in men, but not in women. Moreover, osteoporosis without fracture followed by osteoporosis with fracture was only found in women.

Putting these findings together, we focused on three areas of disease in which we highlight specific differences (respiratory disorders, environmental disorders, and sarcoidosis) (Supplementary Note 3). This analysis identified sex-mediated temporal differences across nearly all major disease areas. Using data from a complete population with free and equal access to high-quality healthcare, we report sex-specific AIRs, age of first hospital diagnosis, diagnosis co-occurrence, differences in risk, and timespans between diagnoses using two complementary terminologies29. We found that more than half of the ICD-10 diagnoses examined had a different AIR in men and women, and this percentage was even higher using the GBD categories. The age at first hospital diagnosis was, on average, higher in women across nearly all areas of disease. We showed that population-level estimates of the relative risk, and even directionality, were often driven by a single sex. Specifically, the jointly observed longitudinal patterns were most strongly driven by men, and the strength of directionality was weaker in women, irrespective of the terminology used. There were many non-sex-specific diagnosis co-occurrences found only in men or only in women; these discrepancies were tied to differences in the relative risk as well as the timespan between two diagnoses. Using the diagnosis trajectory approach, we illustrate how the sex-specific statistics can be used in the search for differences in longitudinal patterns.
In three case stories within respiratory disorders, environmental disorders, and sarcoidosis, we highlighted how the methods applied in this article provide insight into gender-specific trends in diseases and disease progression. Taken together, this is, to our knowledge, the most comprehensive analysis of sex incongruities in a single population presented so far. The study used a national patient registry containing information from all private and public hospital admissions in Denmark, including all age groups. The population of Denmark is reasonably homogeneous (~11.1% immigrants and descendants in 2015, of which 6.2% are from non-European countries)30; thus, we expect that our observations are not confounded by race. Due to the nature of registry data, there are many latent factors for which we could not account. We have attempted to eliminate confounding from age, admission type, changing diagnostic criteria, and seasonal influence. One of the largest limitations of the study is the quality of data recording, and we cannot rule out that some of the incongruities could be explained by systematic errors. Nonetheless, the registry data are used for hospital reimbursement, undergoing yearly compensation adjustments, and thus the accuracy of most diagnoses is high31. We chose to investigate only the first occurrence of a diagnosis. It is extremely difficult to determine when a diagnosis is a recurrence, or simply repeated because the patient changed wards (or similar). Often, for nonacute conditions, there are waiting lists at the hospitals. Waiting times fluctuate over the 20-year period, due to political decisions on budgets, prioritization of disease areas such as cancer, and new technologies. Hence, we did not include recurrences, as they could potentially introduce bias and spurious findings. Other limitations regarding true disease state may be due to systematic gaps in medical evaluation, resulting in under- or overdiagnosis.
This under- or overdiagnosis may result from a variety of causes, and the interaction between underdiagnosis, overdiagnosis, and sex is of general interest, but not something we explored. We used two different terminologies to examine sex differences. The ICD-10 terminology reflects current clinical practice and how hospital admissions have been coded in Denmark since 1994. In a tradeoff between power and specificity, we worked with ICD-10 at the third level. The GBD categories represent clinical entities, and sometimes follow different definitions. For instance, we would not have identified the relationship between osteoporosis and two of the underlying components of COPD, emphysema and bronchitis, had we only used the GBD terminology. Nonetheless, the GBD categories also pointed to important findings that could not be identified using only the ICD-10 terminology, such as alcoholic cardiomyopathy. Some sex-specific co-occurrences may also be treatment provoked. There is an increasing focus on sex-mediated side effects, which may be due to physical, hormonal, or even genetic differences32,33. This was another aspect we could not explore further, due to lack of full access to medication data. In the co-occurrence analysis, we did not apply prior knowledge to assign the direction of association, i.e. whether diagnosis A was a risk factor for B or vice versa, but used advanced statistical models to infer the most likely order. Many conditions develop asymptomatically or with diffuse symptoms. Symptoms will often, but not always, be identified prior to the underlying cause. As a consequence, some conditions are not necessarily discovered in the order they arise. However, we do not discern whether this relates to different etiology, differences in presentation of symptoms, genetics (e.g. the well-known fact that the Y-chromosome increases the risk of CVD in men34,35,36), differences in drug usage (e.g.
higher rates of cytochrome P450 CYP3A substrate metabolism in women32), or biases in the healthcare system (e.g. frequency of contact). The main goal was to present an overall view of sex differences, irrespective of mechanistic molecular causes, links to differences in environmental exposures, or biases in the healthcare system. Menopause may also confound the results. Menopause is not recorded in the registry, but could be explored by selecting a fixed age. An earlier study found the average age at menopause to be 49 years, with a standard deviation of approximately 15 years37. Thus, selecting a fixed age could introduce substantial bias, given the large spread. This is better explored in a resource where menopause is explicitly recorded, e.g. the UK Biobank38. In cases with rare incidence or co-occurrence of diagnoses, it can be difficult to obtain a proper estimate of the standard error (SE), leading to inflated intervals for incidence rates or relative risks. We have attempted to mitigate this by using a Bayesian Hierarchical Model (BHM). The BHM improves the estimate of the SE by pooling information across groups, an approach also used in the GBD and even in clinical trials39,40. An argument often raised against Bayesian statistics is that the choice of priors may introduce biases in the estimates. Here, however, we have chosen informative priors that center the estimates at no effect and pool the standard deviation. Thus, instead of introducing an unwanted bias, we have actually made a more conservative estimate compared to traditional models, which often assume an uninformative prior41. Lastly, we have disclosed all investigations we have performed in the Supplementary Information and provide a rich set of aggregate data that can be used in future studies. We validated a number of the co-occurrences against the existing literature. The majority of the articles investigating the same conditions agreed with our findings.
Nevertheless, this task is challenging, as no other study is as broad as the one we present. Many studies do not investigate whether there is a difference in sex-specific risks8; this omission spans meta-analyses, cohort studies, and case-control studies. Sex is an important factor in epidemiological studies. In studies of single or few diagnoses in different cohorts, as well as in the GBD, it is well documented that there are sex-mediated differences in incidence rates3,39. Our results, derived directly from hospital admissions for single-disease incidence, align well with previously reported differences, such as in cancer, musculoskeletal disorders, and autoimmune diseases15,42,43. We found that the age of first hospital diagnosis was, on average, nearly always higher in women. To our knowledge, this has not been systematically studied before, having been reported only for a few specific areas, such as cardiovascular disorders44. A growing body of literature suggests that the delayed onset of cardiovascular disorders in women is due to the protective effect of estrogen44,45. While the age of first hospital diagnosis should not be confused with the age of onset, there is growing evidence that the protective role of estrogen is more widespread than previously thought. For instance, estrogen has been suggested to be a neuroprotective factor, which is in agreement with our finding of a later age of first hospital diagnosis in women for nervous system disorders46. Sex can also be a strong confounding factor when estimating diagnosis co-occurrence. To date, no study has performed a systematic investigation of sex-specific diagnosis co-occurrences. We show how population-wide estimates of co-occurrence can be driven by a single sex, even when using a matched comparison group to negate other confounding factors.
Furthermore, we demonstrated that the jointly observed longitudinal patterns are most strongly driven by men, and that the strength of directionality is weaker in women. Reporting sex-specific estimates is increasingly becoming standard practice, in particular due to the recognition that sex and gender considerations are vital in precision medicine47,48. We found many directional pairs that were unique to one sex. In this regard, our data demonstrate that disease co-occurrences related to cancers, digestive disorders, and injuries were overrepresented in men. This result indicates that men are more burdened by cancers, and complications, in the digestive system. In a temporal context, we also noticed that men are diagnosed with digestive system diseases (ch11) prior to neoplasms (ch2). This points towards a disparity in lifestyle-related diseases. Taken together, these findings suggest a bias in clinical practice, in which men with digestive system disorders are monitored more closely for neoplasms, whereas women are not. Men also suffer more co-occurring injuries. In women, both respiratory and musculoskeletal disorders were overrepresented in combination with symptoms and signs (ch18). One explanation could be that the prevalence of musculoskeletal disorders is higher in women, which leads to more nonspecific symptoms, such as pain. A previous study found that women report musculoskeletal-related pain more often, and that this could be caused by a musculoskeletal sex difference49. In contrast to earlier large studies pooling data from multiple cohorts, we have been able to compare the timespan between temporal co-occurrences. We identified cases in which the temporal pairs had a different timespan in men and women. We note that in 72% of cases the diagnosis-free interval is longer for women than for men.
This finding aligns well with our earlier observation that the age of first hospital diagnosis is nearly always higher in women, and clearly shows how this widespread effect translates into a temporal context. The diagnosis trajectory analysis showed an increased risk in women between several injuries, substance abuse, and complications of substance abuse; we speculate that these could be indicators of a gender bias reflecting domestic violence and consequences of drug abuse, in light of an earlier finding that substance abuse is a risk factor for nonfatal injuries in women50. The trajectory analysis also demonstrated a temporal relationship between nontoxic goiter, thyroid cancer, and secondary cancer in which men were at a higher risk. Women have a higher incidence of thyroid cancer, and male sex is described as a risk factor for malignant thyroid nodules. Earlier studies have found that the aggressive subtypes of thyroid cancer have a similar incidence in men and women, but that men often present at a more advanced stage51,52. This observation is an important finding from both epidemiological studies and this population-wide analysis, and demonstrates the necessity of investigating multistep temporal associations. Furthermore, the analysis of obstructive lung disease and osteoporosis trajectories indicated patterns of severe underdiagnosis. First, obstructive lung diseases were observed as risk factors for osteoporosis only in men. This contrasts with an earlier cross-sectional study, which found that sex did not modulate the association between airflow obstruction and osteoporosis53. However, the temporal relative risk may be more informative than the odds ratio for the nontemporal co-occurrence. Moreover, obstructive lung diseases are underdiagnosed in women, a factor that can affect the estimates in a cohort study.
Secondly, there was no directionality observed between osteoporosis without fracture and osteoporosis with fracture in men, although the relative risk was elevated in both directions. In contrast, women did show this directionality. This suggests that osteoporosis in men is not diagnosed prior to fracture, and therefore not managed, which could be part of the reason why mortality is higher in men with osteoporotic fracture compared to women54. Lowered bone mineral density is a known adverse effect of corticosteroid therapy, a treatment often used in asthma and COPD. The lack of a connection in women could possibly be due to the large difference in age at diagnosis for asthma, meaning that treatment is started later. In the case of COPD, corticosteroid therapy is only suggested for shorter symptomatic periods. However, COPD is a substantially underdiagnosed disease, and two studies have estimated that 50−80% of COPD patients are undiagnosed55,56. Moreover, the COPD diagnosis is confirmed by spirometry in only 50% of diagnosed patients57,58. Hence, patients receiving a diagnosis of COPD may be more symptomatically severe and would be expected to receive greater systemic steroid exposure. Recent data also suggest that moderate to severe emphysema is itself a risk factor for osteoporosis59. Another equally valid explanation could be that the COPD phenotype carries a risk of osteoporosis due to COPD-associated frailty, smoking effects on bone metabolism, and limitations in physical activity. In addition, there is an interesting and emerging set of studies showing vitamin D receptor polymorphisms in patients with COPD and osteoporosis60. The relative impact of these factors would be greatest in men, given that the baseline level of osteoporosis is more than four times higher in women than in men by age 50.
Our case story regarding respiratory disorders also highlighted that some complications, such as bronchiectasis and emphysema, differed between men and women, a finding that may be relevant to clinical assessment and management. Lastly, we found 16 cases in which the directionality between two diagnoses was opposite in men and women. Some of these point to conditions in which men are not diagnosed prior to serious complications, as in the case of Crohn's disease and abscesses of anal and rectal regions. Other examples included IHD and PT, and pancreatitis and gallbladder and biliary diseases. One study found that pancreatitis in men was typically alcohol induced, while in women it was due to biliary problems, which could explain the reversed order of diagnosis61. We speculate that the observed difference between IHD and PT, in which IHD is a recognized risk factor, could possibly be due to an underdiagnosis of IHD in women. This is further complicated by the fact that men and women develop different subtypes of IHD44. Taken together, our findings strongly suggest many disparities in a population with uniform, single-payer healthcare, again underscoring the need for better sex-stratified medicine. Generally, many of our findings align well with larger meta-studies, such as the GBD. Our study adds the temporal dimension between disorders; in doing so, we provide guidance for the design of future studies while also pointing to potential gaps in disease surveillance, diagnosis, and management. Nonetheless, a clear extension would be to perform this study in other cohorts, such as the UK Biobank, although it is not comparable in size38. Including resources such as the UK Biobank or the emerging FinnGen and All of Us data sets would potentially make it possible to identify genetic variants that could explain part of the discrepancy in disease progression.
Study design and participants

This was a population-based registry study based on the Danish National Patient Registry (DNPR). The DNPR covered all public and private hospital admissions in Denmark during 1994−2015, comprising 6,909,676 patients (ICD-10 period only). The healthcare system in Denmark is universal, meaning that everyone living in Denmark has free access to care. Patients can be tracked through the healthcare system using the Central Person Registry (CPR) number, a unique identifier assigned to every Danish citizen at birth or immigration (initiated in 1968). Visits to general practitioners (GPs) and private specialist clinics were not included in the data set. Admissions included inpatient (patients admitted to the hospital overnight), outpatient (patients not admitted overnight), and emergency department contacts. Prior to 2002, there were both full-day and half-day inpatients; after 2002, the two groups were merged into one. Hence, we merged full-day and half-day inpatients from before 2002 into one group, inpatient. Inpatient records cover the time from admission of a patient to a hospital ward until discharge to another ward or from the hospital. If a patient was discharged to another ward, the records were combined into one record. Likewise, if the patient was re-admitted to the hospital the next day, the records were combined. The data also included open outpatient contacts: if a patient has regular follow-ups at the hospital, the contact may remain open indefinitely as an outpatient contact. Since 2000, the DNPR has been used for reimbursement, and the reimbursement rates are adjusted on an annual basis62. All referral diagnoses were excluded. Referral diagnoses are used when patients are referred to another ward or department for further investigation based on a suspicion of a disorder. The ICD-10 is structured hierarchically with four levels. We studied diagnosis codes at the third ICD-10 level.
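Working "at the third ICD-10 level" means truncating codes to their three-character category. A minimal helper, our own illustration rather than code from the paper; note that national code variants may add prefixes, which this sketch does not handle:

```python
def icd10_level3(code: str) -> str:
    """Truncate an ICD-10 code to its three-character category,
    e.g. 'I25.9' -> 'I25' (illustrative helper, not from the paper)."""
    return code.replace(".", "")[:3].upper()

codes = ["I25.9", "K50.1", "J449", "m80.08"]
print([icd10_level3(c) for c in codes])  # ['I25', 'K50', 'J44', 'M80']
```

Grouping fourth-level codes under their category trades specificity for statistical power, as discussed above.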
We excluded ICD-10 diagnoses from chapters 20, 21, and 22, as well as all codes specific to the Danish version of ICD-10. Codes specific to Denmark mainly describe length and weight at birth. We used the Chronic Condition Indicator to differentiate between acute and chronic ICD-10 codes (https://www.hcup-us.ahrq.gov/toolssoftware/chronic_icd10/chronic_icd10.jsp, last visited 13 July 2018). We performed a complementary analysis using the GBD categories, retrieved from http://ghdx.healthdata.org/record/global-burden-disease-study-2016-gbd-2016-causes-death-and-nonfatal-causes-mapped-icd-codes (last accessed 12 June 2018). The corresponding analysis is described in detail in Supplementary Note 1.

Bayesian inference and model fitting

Posterior distributions are summarized as a BCI. The BCI is the interval that spans the most credible values of the distribution, sometimes also referred to as the Highest Density Interval63. We defined the range of the BCI in this work to be the interval that spans 95% of the posterior distribution. Unless otherwise specified, the reported effect size is the median of the posterior distribution. We also defined a Region Of Practical Equivalence (ROPE) for the quantities of interest. A ROPE is a small region of values considered to be practically equivalent to a null value63. This ensures that an effect size of interest has a magnitude of clinical relevance, and is not just marginally different from the null value. All Bayesian models were fitted using the No-U-Turn Sampler (NUTS), a Hamiltonian Monte Carlo (HMC) variant, implemented in Stan v. 2.17.0, an open-source probabilistic programming language64,65. Unless otherwise specified, we ran four HMC chains with default settings for a total of 4000 samples, 2000 of them for warm-up to adapt HMC-specific hyper-parameters. The number of samples is significantly lower than what is usually drawn using, e.g., Gibbs sampling.
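The 95% BCI (highest density interval) and the ROPE check described above can be computed from posterior draws. A sketch on synthetic samples; the draws and the ROPE limits are invented for illustration, whereas the paper's actual posteriors come from the Stan models:

```python
import numpy as np

def hdi(samples, cred=0.95):
    """Highest Density Interval: the narrowest interval containing
    `cred` of the posterior draws (one way to compute a BCI)."""
    x = np.sort(np.asarray(samples))
    n = len(x)
    k = int(np.ceil(cred * n))
    # width of every window of k consecutive order statistics
    widths = x[k - 1:] - x[: n - k + 1]
    i = int(np.argmin(widths))
    return x[i], x[i + k - 1]

rng = np.random.default_rng(3)
# Stand-in posterior draws for a log relative risk (synthetic).
draws = rng.normal(loc=0.25, scale=0.05, size=4000)

lo, hi = hdi(draws)
rope = (-0.05, 0.05)  # illustrative region of practical equivalence
print(f"95% BCI: [{lo:.3f}, {hi:.3f}]")
print("effect outside ROPE:", lo > rope[1] or hi < rope[0])
```

Declaring an effect practically relevant only when the whole BCI lies outside the ROPE is more conservative than merely excluding the null value.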
This is due to the nature of the NUTS-HMC algorithm, which converges faster65. We assessed convergence by inspecting the R-hat statistic, tree depth, and number of divergences66,67. The R-hat statistic describes the variation between chains: if all chains have arrived at the exact same posterior distribution for a given parameter, its R-hat will be 1. The tree depth plot is a method for assessing pathology in the HMC algorithm. If the tree depth reaches the maximum at every iteration past warm-up, this indicates random-walk behavior, which can lead to biases in the parameter estimates. A divergence happens when the model has numerical problems (e.g. division by zero, underflow, or overflow), and may indicate a problematic posterior or a model that does not fit the data well. In this work, we conclude that a model has converged if and only if (1) all R-hat values are below 1.1, (2) the tree depth does not reach the maximum in any of the chains past warm-up, and (3) there are zero divergences. Diagnosis incidence rates We examined all diagnoses at the third ICD-10 level that occurred in at least 100 patients during the 21-year period. The cutoff was set to avoid diagnoses used only very rarely or never. A number of diagnoses can only occur in one sex; for instance, hyperplasia of the prostate can only occur in men. To identify sex-specific diagnoses, we manually curated each diagnosis examined in this work and classified whether it was sex-specific or not (Supplementary Data 8). A trained clinician oversaw and verified the curation. To estimate the incidence, we fitted a hierarchical Bayesian Poisson model of the form shown in Eq. (1), $$y_i\sim {\mathrm{Poisson}}({\mathrm{exp}}(\eta _i))$$ in which ηi is a linear combination of the strata for every diagnosis as shown in Eq.
(2), $$\eta _i = \beta _{i,0} + \beta _{i,{\mathrm {age}}} \ast x_{i,{\mathrm {age}}} + \beta _{i,{\mathrm {sex}}} \ast x_{i,{\mathrm {sex}}} + {\mathrm{log}}({\mathrm{offset}}_{i})$$ in which the age is one of the 21 5-year interval groups defined in the European Standard Population 2013 (Eurostat)24, the sex is a binary indicator, and the offset is the population at risk. To complete the model, we specify a set of priors on the coefficients, shown in Eqs. (3–5), $$\beta _{i,0}\sim N\left( {0,\,\sigma _0} \right),$$ $$\beta _{i,{\mathrm {sex}}}\sim N\left( {0,\sigma _{{\mathrm {sex}}}} \right),$$ $$\beta _{i,{\mathrm {age}}}\sim N\left( {0,\sigma _{{\mathrm {age}}}} \right)$$ in which βage represents a coefficient for each of the 21 age groups, with an individual prior, σage, on each coefficient. We defined the priors on the scales of the coefficients as shown in Eqs. (6–8), $$\sigma _0\sim N_ + \left( {0,3} \right),$$ $$\sigma _{{\mathrm {sex}}}\sim N_ + \left( {0,0.5} \right),$$ $$\sigma _{{\mathrm {age}}}\sim N_ + \left( {0,1} \right)$$ in which N+ is the truncated normal distribution. For the coefficients, we chose weakly informative priors, with the exception of the scale of β0: because of the large number of people at risk (the offset), we chose a less informative prior for it. From the fitted model, we simulated the number of cases for each diagnosis, as displayed in Eq. (9), $$\hat y_i\sim {\mathrm {Poisson}}({\mathrm{exp}}(\hat \eta _i))$$ in which the linear predictor, \(\hat \eta _i\), is formed from the estimated coefficients, Eq. (10), $$\hat \eta _i = \hat \beta _{i,0} + \hat \beta _{i,{\mathrm {age}}} \ast x_{i,{\mathrm {age}}} + \hat \beta _{i,{\mathrm {sex}}} \ast x_{i,{\mathrm {sex}}} + {\mathrm{log}}({\mathrm {offset}}_i).$$ From the fitted coefficients, we calculated the age-adjusted incidence rate (AAIR) using the European Standard Population 201324, as shown in Eq.
(11) $${\mathrm {AAIR}} = \frac{{\mathop {\sum }\nolimits_i p_i \ast N_i}}{{\mathop {\sum }\nolimits_i N_i}}$$ in which pi is the age-specific rate, and Ni is the population of age group i, according to the European Standard Population 2013. Rates were calculated for all, for men, and for women, using the European Standard Population 2013; age-adjusted rates are per 100,000. If the absolute value of the relative difference is greater than 0.1, we conclude that there is a difference in incidence rate. The relative difference is defined in Eq. (12), $$d = \frac{{{\mathrm {AAIR}}_{{\mathrm {men}}} - {\mathrm {AAIR}}_{{\mathrm {women}}}}}{{({\mathrm {AAIR}}_{{\mathrm {men}}} + {\mathrm {AAIR}}_{{\mathrm {women}}})/2}},$$ where a positive number indicates a higher AAIR in men, and a negative number a higher AAIR in women. Age of first hospital diagnosis We calculated the average age of diagnosis for a given ICD-10 code by calculating the mean across all cases in the DNPR separately for men and women. We identified differences using the Welch t test. P values were adjusted using the Benjamini-Hochberg (BH) procedure. We report the difference in means. We estimated the chapter-wise difference by calculating the weighted mean. Temporal co-occurrences of diagnoses We examined all pairs of diagnoses that occurred in more than 100 individuals. The cutoff was set to ensure that the combination of two diagnoses is sufficiently prevalent to be of interest. The time resolution of the DNPR is one day, and any diagnoses given on the same day were not counted. Only the first occurrence of a diagnosis was considered. The time of diagnosis was taken as the time the patient was discharged. If the patient had not yet been discharged, the date of the last diagnosis was used instead. ICD-10 has a dual coding system, the dagger−asterisk system: the asterisk represents the symptom or manifestation of the disease, and the dagger indicates its etiology. We identified these pairs and excluded them from subsequent analysis.
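For reference, Eqs. (11) and (12) above amount to a weighted mean and a symmetric relative difference; a minimal sketch with hypothetical rates:

```python
def aair(age_rates, std_pop):
    """Age-adjusted incidence rate, Eq. (11): mean of the age-specific
    rates p_i weighted by the standard-population sizes N_i."""
    return sum(p * n for p, n in zip(age_rates, std_pop)) / sum(std_pop)

def relative_difference(aair_men, aair_women):
    """Eq. (12): positive means a higher rate in men, negative a higher
    rate in women; |d| > 0.1 is read as a difference in incidence."""
    return (aair_men - aair_women) / ((aair_men + aair_women) / 2)
```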
To negate the most common confounding factors, we sampled a matched comparison group. For any given combination of diagnoses, A and B, we fix A as the exposure (Ex) and B as the event (Ev) to estimate the time-resolved relative risk, RR(A → B), and directionality, Pr(A → B). For every exposed patient, we sampled five non-exposed cases matched to (i) be in the same age group, (ii) have a hospital discharge from the same type of encounter (inpatient, outpatient, emergency department), and (iii) be discharged in the same month of the same year, ±3 months. An earlier study in the DNPR found that the hospital encounter is a confounding factor in as many as 15% of identified diagnosis co-occurrences20. Moreover, modified disease definitions and diagnostic criteria may affect both incidence and prevalence25. Previous studies using the DNPR found that changes in diagnostic criteria increased the hospitalization rate for acute myocardial infarction (AMI), and increased the prevalence and shifted the age of diagnosis for autism26,27. We negate this effect by matching on the encounter year. Lastly, by matching on the encounter month we diminish seasonal variation that may influence the incidence of, for instance, infectious diseases. The relative risk is not symmetrical, i.e. RR(A → B) ≠ RR(B → A), and thus we repeat the process of selecting matched controls by fixing B as the exposure and A as the event. This effectively doubles the number of combinations of diagnoses examined. HMC models are computationally expensive to fit. Consequently, prior to running the full hierarchical Bayesian model using Stan, we applied a prefilter by calculating the 95% CI of the relative risk using the formula provided by Morris and Gardner68. The relative risk is given in Eq. (13), $${\mathrm {RR}} = \frac{{N_{{\mathrm A} \to {\mathrm B}}/(N_{{\mathrm A} \to {\mathrm B}} + N_{\mathrm A})}}{{N_{\mathrm B}/(N_{\mathrm B} + N_0)}}$$ and the standard error of the log-transformed RR is shown in Eq.
(14), $${\mathrm {SE}}\left( {\log {\mathrm {RR}}} \right) = \surd \left( {\frac{1}{{N_{{\mathrm {A}} \to {\mathrm {B}}}}} - \frac{1}{{N_{{\mathrm {A}} \to {\mathrm {B}}} + N_{\mathrm {A}}}} + \frac{1}{{N_{\mathrm {B}}}} - \frac{1}{{N_{\mathrm {B}} + N_0}}} \right)$$ hence the CI of the RR is given in Eq. (15), $$\exp \left( {\log {\mathrm {RR}} \pm \left( {z_{1 - \alpha /2} \ast {\mathrm {SE}}\left( {\log {\mathrm {RR}}} \right)} \right)} \right).$$ We calculated the CI separately for men, for women, and for the two sexes combined. Only pairs of diagnoses in which the lower bound of the CI of either RR(A → B) or RR(B → A) excluded 1.01 were included in the subsequent analysis; that is, we only studied diagnosis co-occurrences in which the exposure increased the risk of the subsequent event by more than 1%. We note that we do not perform any correction for multiple testing at this stage; consequently, the number of false positives will be high. Additionally, in cases with a low number of patients, the estimate of the standard error will be inaccurate. However, in the following part we describe a Bayesian hierarchical model (BHM) that refines the estimate of the relative risk. We refine the estimate of the temporal relative risk and directionality between pairs of diagnoses by employing a hierarchical Bayesian model. For each exposure, i, and each event observed together with this exposure, j, we describe the relationship using a Poisson model following Eq. (16), $$y_{ij}\sim {\mathrm {Poisson}}({\mathrm{exp}}(\eta _{ij})),$$ where ηij is a linear combination shown in Eq. (17), $$\eta _{ij} = \beta _{ij,0} + \beta _{ij,{\mathrm {Ex}}} \ast x_{ij,{\mathrm {Ex}}} + \beta _{ij,{\mathrm {Ev}}} \ast x_{ij,{\mathrm {Ev}}} + \beta _{ij,{\mathrm {ExEv}}} \ast x_{ij,{\mathrm {ExEv}}} + {\mathrm{log}}({\mathrm {offset}}_{ij})$$ in which xEx and xEv are indicator variables for the exposure and event, respectively, and xExEv is the interaction between the exposure and event. The offset is the number of people within the group.
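The prefilter in Eqs. (13)–(15) can be implemented directly; the sketch below writes the Morris–Gardner standard error with the standard signs (second and fourth terms subtracted), and the counts are hypothetical.

```python
import math

def rr_ci(n_ab, n_a, n_b, n_0, z=1.96):
    """Relative risk of event B after exposure A, Eq. (13), with the
    Morris-Gardner confidence interval, Eqs. (14)-(15).

    n_ab: exposed patients with the event; n_a: exposed without it;
    n_b: matched controls with the event; n_0: controls without it.
    """
    rr = (n_ab / (n_ab + n_a)) / (n_b / (n_b + n_0))
    se = math.sqrt(1 / n_ab - 1 / (n_ab + n_a) + 1 / n_b - 1 / (n_b + n_0))
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi
```

A pair passes the prefilter when the lower CI bound exceeds 1.01 in either direction.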
We further estimated sex-specific relative risks by introducing a sex term and interaction terms between Ex, Ev, and Sex shown in Eq. (18). $$\begin{array}{l}\eta _{ij} = \beta _{ij,0} + \beta _{ij,{\mathrm {Sex}}} \ast x_{ij,{\mathrm {Sex}}} + \beta _{ij,{\mathrm {Ex}}} \ast x_{ij,{\mathrm {Ex}}} + \beta _{ij,{\mathrm {Ev}}} \ast x_{ij,{\mathrm {Ev}}} + \beta _{ij,{\mathrm {ExSex}}} \ast x_{ij,{\mathrm {ExSex}}}\\ + \beta _{ij,{\mathrm {EvSex}}} \ast x_{ij,{\mathrm {EvSex}}} + \beta _{ij,{\mathrm {ExEv}}} \ast x_{ij,{\mathrm {ExEv}}} + \beta _{ij,{\mathrm {ExEvSex}}} \ast x_{ij,{\mathrm {ExEvSex}}} + {\mathrm{log}}({\mathrm {offset}}_{ij})\end{array}.$$ To complete the model, we specify a set of priors for the regression coefficients, Eqs. (19)–(26) $$\beta _{ij,0}\sim N\left( {0,\sigma _0} \right),$$ $$\beta _{ij,{\mathrm {Ex}}}\sim N\left( {0,\sigma _{{\mathrm {Ex}}}} \right),$$ $$\beta _{ij,{\mathrm {Ev}}}\sim N\left( {0,\sigma _{{\mathrm {Ev}}}} \right),$$ $$\beta _{ij,{\mathrm {ExEv}}}\sim N\left( {0,\sigma _{{\mathrm {ExEv}}}} \right),$$ $$\beta _{ij,{\mathrm {Sex}}}\sim N\left( {0,\sigma _{{\mathrm {Sex}}}} \right),$$ $$\beta _{ij,{\mathrm {ExSex}}}\sim N\left( {0,\sigma _{{\mathrm {ExSex}}}} \right),$$ $$\beta _{ij,{\mathrm {EvSex}}}\sim N\left( {0,\sigma _{{\mathrm {EvSex}}}} \right),$$ $$\beta _{ij,{\mathrm {ExEvSex}}}\sim N\left( {0,\sigma _{{\mathrm {ExEvSex}}}} \right)$$ and weakly informative priors on the scale of each coefficient, Eqs. (27)–(32) $$\sigma _{\mathrm {B}}\sim N_ + \left( {0,2} \right),$$ $$\sigma _{{\mathrm {ExEv}}}\sim N_ + \left( {0,2} \right),$$ $$\sigma _{{\mathrm {EvSex}}}\sim N_ + \left( {0,2} \right),$$ $$\sigma _{{\mathrm {ExSex}}}\sim N_ + \left( {0,2} \right),$$ $$\sigma _{{\mathrm {ExEvSex}}}\sim N_ + \left( {0,2} \right)$$ in which N+ is the truncated normal distribution. We have removed confounding from age, admission type, admission year, and admission month by selecting five matched patients.
Thus, we have not included these terms in the model, as the goal is to study the effects of sex. The prior values chosen for the interaction terms favor effects close to zero. Hence, by prior design, we expect that only a few of the pairs investigated will occur together more than expected by chance. In addition, the hierarchical structure imposes shrinkage on the coefficients and helps inform coefficient estimates across pairs where counts may be low63. Using the posterior distribution, we estimate the directionality and relative risks. We simulated the number of patients who had been diagnosed in the order A → B and B → A, and calculated the probability of observing two diagnoses in a specific direction using the formula specified in Eq. (33), $$\Pr \left( {{\mathrm {A}} \to {\mathrm {B}}} \right) = \frac{{N_{{\mathrm {A}} \to {\mathrm {B}}}}}{{N_{{\mathrm {A}} \to {\mathrm {B}}} + N_{{\mathrm {B}} \to {\mathrm {A}}}}}.$$ The complementary probability, Pr(B → A), is thus as specified in Eq. (34), $$\Pr \left( {{\mathrm{B}} \to {\mathrm{A}}} \right) = 1 - \Pr \left( {{\mathrm{A}} \to {\mathrm {B}}} \right).$$ We define a ROPE on the interval (0.49, 0.51). If the BCI excludes these values, we conclude that the pair of diagnoses has a preferred statistical direction. This probability can also be interpreted quantitatively; for instance, \(\Pr \left( {{\mathrm {A}} \to {\mathrm {B}}} \right) = 0.8\) would correspond to A being diagnosed before B in four out of five cases. Likewise, from the posterior distribution, we calculate an adjusted relative risk using the Cochran−Mantel−Haenszel method shown in Eq.
(35), $${\mathrm {RR}}\left( {{\mathrm {A}} \to {\mathrm {B}}} \right) = \frac{{N_{{\mathrm {m}},{\mathrm {A}} \to {\mathrm {B}}} \ast \frac{{N_{{\mathrm {m}},{\mathrm {B}}} + N_{{\mathrm {m}},0}}}{{N_{\mathrm {m}}}} + N_{{\mathrm {f}},{\mathrm {A}} \to {\mathrm {B}}} \ast \frac{{N_{{\mathrm {f}},{\mathrm {B}}} + N_{{\mathrm {f}},0}}}{{N_{\mathrm {f}}}}}}{{N_{{\mathrm {m}},{\mathrm {B}}} \ast \frac{{N_{{\mathrm {m}},{\mathrm {A}} \to {\mathrm {B}}} + N_{{\mathrm {m}},{\mathrm {A}}}}}{{N_{\mathrm {m}}}} + N_{{\mathrm {f}},{\mathrm {B}}} \ast \frac{{N_{{\mathrm {f}},{\mathrm {A}} \to {\mathrm {B}}} + N_{{\mathrm {f}},{\mathrm {A}}}}}{{N_{\mathrm {f}}}}}}$$ and the male sex-specific relative risk is as specified in Eq. (36), $${\mathrm{RR}}\left( {{\mathrm{A}} \to {\mathrm {B}}} \right)_{\mathrm {m}} = \frac{{\frac{{N_{{\mathrm {m}},{\mathrm {A}} \to {\mathrm {B}}}}}{{N_{{\mathrm {m}},{\mathrm {A}} \to {\mathrm {B}}} + N_{{\mathrm {m}}, {\mathrm {A}}}}}}}{{\frac{{N_{{\mathrm {m}},{\mathrm {B}}}}}{{N_{{\mathrm {m}},{\mathrm {B}}} + N_{{\mathrm {m}},0}}}}}$$ and likewise for women. In this study, we are not interested in identifying inverse comorbidities, i.e. cases where an exposure reduces the risk of a later event. Hence, we require that the lower bound of the RR exclude 1.1, corresponding to at least a 10% increase in risk. If a combination of diagnoses has a preferred direction and the RR lower bound of that direction excludes 1.1, we say that the co-occurrence is a directional pair. To compare the relative risk between men and women, we subtract the posterior distributions of RRmen and RRwomen from each other. If the resulting BCI excludes (−0.1, 0.1), i.e. there is a difference in risk of at least 10%, we note that there is a significant difference in relative risk between men and women.
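Eqs. (33)–(36) above translate into a few lines; the ROPE check on posterior samples and the counts below are illustrative.

```python
def direction_probability(n_ab, n_ba):
    """Pr(A -> B), Eq. (33): fraction diagnosed in the order A then B."""
    return n_ab / (n_ab + n_ba)

def is_directional(prob_samples, rope=(0.49, 0.51)):
    """A pair has a preferred direction if the 95% credible interval of
    Pr(A -> B) lies entirely outside the ROPE around 0.5."""
    s = sorted(prob_samples)
    lo, hi = s[int(0.025 * len(s))], s[int(0.975 * len(s)) - 1]
    return hi < rope[0] or lo > rope[1]

def cmh_rr(strata):
    """Cochran-Mantel-Haenszel relative risk, Eq. (35), pooled over strata
    (here: men and women). Each stratum is (n_ab, n_a, n_b, n_0): exposed
    with/without the event, matched controls with/without the event."""
    num = den = 0.0
    for n_ab, n_a, n_b, n_0 in strata:
        n = n_ab + n_a + n_b + n_0
        num += n_ab * (n_b + n_0) / n
        den += n_b * (n_ab + n_a) / n
    return num / den
```

With a single stratum, `cmh_rr` reduces to the crude relative risk of Eq. (13).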
To compare the difference in directionality strength, we inspected the median of the posterior distribution for the joint direction, and subtracted the posterior distributions of the direction for men and women, respectively, as shown in Eqs. (37) and (38). $$\omega _{{\mathrm {Men}}} = \Pr \left( {{\mathrm {A}} \to {\mathrm {B}}} \right)_{{\mathrm {joint}}} - \Pr \left( {{\mathrm {A}} \to {\mathrm {B}}} \right)_{{\mathrm {Men}}},$$ $$\omega _{{\mathrm {Women}}} = \Pr \left( {{\mathrm {A}} \to {\mathrm {B}}} \right)_{{\mathrm {joint}}} - \Pr \left( {{\mathrm {A}} \to {\mathrm {B}}} \right)_{{\mathrm {Women}}}.$$ We tested whether there was a difference in the variance of the distributions using the F-test, which requires that the distributions are normally distributed; we confirmed this by visual inspection of the density plots (Supplementary Fig. 1). We report the ratio between variances (men compared to women) as the effect size, together with the 95% CI. Literature validation and comparison of co-occurrences with a higher risk in men or women was performed by searching PubMed for articles mentioning either disease, or a more specific relevant term. Matching articles were inspected for cohort sizes and for any sex-specific estimate of risk or mention of sex as a risk factor. Difference in time between diagnoses The time between two diagnoses is computed across all patients who have been diagnosed with both. We only consider the directional pairs, defined by an elevated relative risk and a preferred direction. We note that, due to the long follow-up, the distributions have a heavy tail and are thus not normally distributed. Therefore, we used the two-sided Mann−Whitney U test. Only directional pairs found in both men and women are investigated. Effect sizes are reported as the median difference in time, and p values are corrected for multiple testing using the BH method.
A median difference in time less than zero indicates that the disease transition progresses faster in women; likewise, a median difference in time greater than zero indicates that the progression is faster in men. Disease trajectories We pieced together directional pairs of diagnoses to form multistep trajectories18,19. For every pairwise co-occurrence, we iteratively added a diagnosis and counted the number of people following the trajectory in the population. In this particular study, we only investigated trajectories followed by more than 100 people, with a minimum of four diagnoses. Using the disease trajectory framework, we studied two categories of trajectories. First, we investigated the directional pairs that had the biggest difference in relative risk between men and women. Second, we selected two diseases, obstructive lung disease and osteoporosis, which prior studies had found to be underdiagnosed in women and men, respectively. The trajectories were visualized as networks, in which each node represents a diagnosis and the connection between two nodes, the edge, represents a directional link between two diagnoses. Reporting summary Further information on experimental design is available in the Nature Research Reporting Summary linked to this article. The study was approved by the Danish Data Protection Agency (ref: 2015-54-0939 and SUND-2017-57) and the Danish Health Authority (ref: FSEID-00001627 and FSEID-00003092). Permission to access and analyze data can be obtained following approval from the Danish Data Protection Agency and the Danish Health Authority. A reporting summary for this article is available as a Supplementary Information file. Stan (v 2.17)64, Python (v2.7), and R (v.3.1.3) were used for statistical analysis. Due to privacy concerns, the provided Supplementary Data only contain estimates for diagnoses and co-occurrences assigned to at least five men and women. Baggio, G., Corsini, A., Floreani, A., Giannini, S. & Zagonel, V.
Gender medicine: a task for the third millennium. Clin. Chem. Lab. Med. 51, 713–727 (2013). Regitz-Zagrosek, V. Sex and gender differences in health. EMBO Rep. 13, 596–603 (2012). Franconi, F., Sanna, M., Straface, E., Chessa, R. & Rosano, G. Sex and Gender Aspects in Clinical Medicine. Pathophysiology (Springer, New York, 2012). World Health Organization. WHO gender policy: integrating gender perspectives in the work of WHO. http://origin.who.int/gender-equity-rights/knowledge/a78322/en/ (Accessed 22 February 2018). (2002). Siddiqui, R. A. et al. X chromosomal variation is associated with slow progression to AIDS in HIV-1-infected women. Am. J. Hum. Genet. 85, 228–239 (2009). Liu, L. Y., Schaub, M. A., Sirota, M. & Butte, A. J. Sex differences in disease risk from reported genome-wide association study findings. Hum. Genet. 131, 353–364 (2012). Cereda, E. et al. Dementia in Parkinson's disease: is male gender a risk factor? Park. Relat. Disord. 26, 67–72 (2016). Ortona, E., Delunardo, F., Baggio, G. & Malorni, W. A sex and gender perspective in medicine: a new mandatory challenge for human health. Ann. Ist. Super. Sanita 52, 146–148 (2016). Caenazzo, L., Tozzo, P. & Baggio, G. Ethics in women's health: a pathway to gender equity. Adv. Med. Ethics 2, 5 (2015). Zakiniaeiz, Y., Cosgrove, K. P., Potenza, M. N. & Mazure, C. M. Balance of the sexes: addressing sex differences in preclinical research. Yale J. Biol. Med. 89, 255–259 (2016). Shader, R. I. More on women's health, gender medicine, and the complexities of personalized medicine. Clin. Ther. 38, 233–234 (2016). Mcgregor, A. J. The impact sex-differences research can have on women's health. Clin. Ther. 38, 1–2 (2015). Mehta, L. S. et al. Acute myocardial infarction in women: a scientific statement from the American Heart Association. Circulation 133, 916–947 (2016). Regitz-Zagrosek, V. Therapeutic implications of the gender-specific aspects of cardiovascular disease. Nat. Rev. Drug. Discov. 5, 425–438 (2006). 
Eaton, W. W., Rose, N. R., Kalaydjian, A., Pedersen, M. G. & Mortensen, P. B. Epidemiology of autoimmune diseases in Denmark. J. Autoimmun. 29, 1–9 (2007). Willson, T., Nelson, S. D., Newbold, J., Nelson, R. E. & LaFleur, J. The clinical epidemiology of male osteoporosis: a review of the recent literature. Clin. Epidemiol. 7, 65–76 (2015). Ancochea, J. et al. Infradiagnóstico de la enfermedad pulmonar obstructiva crónica en mujeres: cuantificación del problema, determinantes y propuestas de acción. Arch. Bronconeumol. 49, 223–229 (2013). Beck, M. K., Westergaard, D., Jensen, A. B., Groop, L. & Brunak, S. Temporal order of disease pairs affects subsequent disease trajectories: the case of diabetes and sleep apnea. Biocomput 2017 22, 380–389 (2017). Beck, M. K. et al. Diagnosis trajectories of prior multi-morbidity predict sepsis mortality. Sci. Rep. 6, 36624 (2016). Jensen, A. B. et al. Temporal disease trajectories condensed from population-wide registry data covering 6.2 million patients. Nat. Commun. 5, 4022 (2014). Bagley, S. C. & Altman, R. B. Computing disease incidence, prevalence and comorbidity from electronic medical records. J. Biomed. Inform. 63, 108–111 (2016). Grimes, D. A. & Schulz, K. F. Bias and causal associations in observational research. Lancet 359, 248–252 (2002). Hidalgo, C. A., Blumm, N., Barabási, A. L. & Christakis, N. A. A Dynamic network approach for the study of human phenotypes. PLoS Comput. Biol. 5, e1000353 (2009). Eurostat Task force. Revision of the European Standard Population. http://ec.europa.eu/eurostat/documents/3859598/5926869/KS-RA-13-028-EN.PDF/e713fa79-1add-44e8-b23d-5e8fa09b3f8f (accessed 29 November 2017) (2013). Doust, J. et al. Guidance for modifying the definition of diseases. JAMA Intern. Med. 177, 1020 (2017). Parner, E. T., Schendel, D. E. & Thorsen, P. Autism prevalence trends over time in Denmark: changes in prevalence and age at diagnosis. Arch. Pediatr. Adolesc. Med. 162, 1150–1156 (2008). Abildstrom, S. 
Z., Rasmussen, S. & Madsen, M. Changes in hospitalization rate and mortality after acute myocardial infarction in Denmark after diagnostic criteria and methods changed. Eur. Heart J. 26, 990–995 (2005). Jørgensen, N. R. et al. The prevalence of osteoporosis in patients with chronic obstructive pulmonary disease: a cross sectional study. Respir. Med. 101, 177–185 (2007). Barber, R. M. et al. Healthcare Access and Quality Index based on mortality from causes amenable to personal health care in 195 countries and territories, 1990–2015: a novel analysis from the Global Burden of Disease Study 2015. Lancet 390, 231–266 (2017). Denmark in Figures. Denmark in Figures. http://www.dst.dk/en/Statistik/Publikationer/VisPub?cid=19006 (accessed 21 July 2017) (2015). Thygesen, S. K., Christiansen, C. F., Christensen, S., Lash, T. L. & Sørensen, H. T. The predictive value of ICD-10 diagnostic coding used to assess Charlson comorbidity index conditions in the population-based Danish National Registry of Patients. Bmc Med. Res. Methodol. 11, 83 (2011). Nicolson, T. J., Mellor, H. R. & Roberts, R. R. A. Gender differences in drug toxicity. Trends Pharmacol. Sci. 31, 108–114 (2010). Spoletini, I., Vitale, C., Malorni, W. & Rosano, G. M. C. in Sex and Gender Differences in Pharmacology (ed. Regitz-Zagrosek, V.) 91–105 (Springer, Berlin, Heidelberg, 2013). https://doi.org/10.1007/978-3-642-30726-3_5 Charchar, F. J. et al. Association of the human Y chromosome with cholesterol levels in the general population. Arterioscler. Thromb. Vasc. Biol. 24, 308–312 (2004). Charchar, F. J. et al. Inheritance of coronary artery disease in men: an analysis of the role of the y chromosome. Lancet 379, 915–922 (2012). Charchar, F. J., Tomaszewski, M., Strahorn, P., Champagne, B. & Dominiczak, A. F. Y is there a risk to being male? Trends Endocrinol. Metab. 14, 163–168 (2003). Boldsen, J. L. & Jeune, B. Distribution of age at menopause in two danish samples. Hum. Biol. 62, 291–300 (1990). Sudlow, C. 
et al. UK Biobank: an open access resource for identifying the causes of a wide range of complex diseases of middle and old age. PLoS Med. 12, e1001779 (2015). Vos, T. et al. Global, regional, and national incidence, prevalence, and years lived with disability for 310 diseases and injuries, 1990–2015: a systematic analysis for the Global Burden of Disease Study 2015. Lancet 388, 1545–1602 (2016). Quintana, M., Viele, K. & Lewis, R. J. Bayesian analysis: using prior information to interpret the results of clinical trials. JAMA 318, 1605–1606 (2017). Greenland, S. Bayesian perspectives for epidemiological research: I. Foundations and basic methods. Int. J. Epidemiol. 35, 765–775 (2006). Fitzmaurice, C. et al. Global, regional, and national cancer incidence, mortality, years of life lost, years lived with disability, and disability-adjusted life-years for 32 cancer groups, 1990 to 2015. JAMA Oncol. 3, 524 (2017). Smith, E. et al. The global burden of other musculoskeletal disorders: estimates from the Global Burden of Disease 2010 study. Ann. Rheum. Dis. 73, 1462–1469 (2014). Regitz-Zagrosek, V. & Kararigas, G. Mechanistic pathways of sex differences in cardiovascular disease. Physiol. Rev. 97, 1–37 (2016). Regitz-Zagrosek, V. in Sex and Gender Aspects in Clinical Medicine (eds Oertelt-Prigione, S. & Regitz-Zagrosek, V.) 17–44 (Springer-Verlag London, 2012). https://doi.org/10.1007/978-0-85729-832-4 Arevalo, M.-A., Azcoitia, I. & Garcia-Segura, L. M. The neuroprotective actions of oestradiol and oestrogen receptors. Nat. Rev. Neurosci. 16, 17–29 (2014). Legato, M. J., Johnson, P. A. & Manson, J. E. Consideration of sex differences in medicine to improve health care and patient outcomes. JAMA 316, 1865 (2016). Schiebinger, L., Leopold, S. S. & Miller, V. M. Editorial policies for sex and gender analysis. Lancet 388, 2841–2842 (2016). Rollman, G. B. & Lautenbacher, S. Sex differences in musculoskeletal pain. Clin. J. Pain 17, 20–24 (2001). Kyriacou, D. N. et al. 
Risk factors for injury to women from domestic violence. N. Engl. J. Med. 341, 1892–1898 (1999). Lawrence, W. & Kaplan, B. J. Diagnosis and management of patients with thyroid nodules. J. Surg. Oncol. 80, 157–170 (2002). Rahbari, R., Zhang, L. & Kebebew, E. Thyroid cancer gender disparity. Future Oncol. 6, 1771–1779 (2010). Sin, D. D., Man, J. P. & Man, S. F. P. F. P. The risk of osteoporosis in Caucasian men and women with obstructive airways disease. Am. J. Med. 114, 10–14 (2003). Center, J. R., Nguyen, T. V., Schneider, D., Sambrook, P. N. & Eisman, J. A. Mortality after all major types of osteoporotic fracture in men and women: an observational study. Lancet 353, 878–882 (1999). Çolak, Y., Afzal, S., Nordestgaard, B. G., Vestbo, J. & Lange, P. Prognosis of asymptomatic and symptomatic, undiagnosed COPD in the general population in Denmark: a prospective cohort study. Lancet Respir. Med. 5, 426–434 (2017). Martinez, C. H. et al. Undiagnosed obstructive lung disease in the United States. Associated factors and long-term mortality. Ann. Am. Thorac. Soc. 12, 1788–1795 (2015). Arne, M. et al. How often is diagnosis of COPD confirmed with spirometry? Respir. Med. 104, 550–556 (2010). Koefoed, M. M., Christensen, RdePont, Søndergaard, J. & Jarbøl, D. E. Lack of spirometry use in Danish patients initiating medication targeting obstructive lung disease. Respir. Med. 106, 1743–1748 (2012). Bon, J. et al. Radiographic emphysema, circulating bone biomarkers, and progressive bone mineral density loss in smokers. Ann. Am. Thorac. Soc. 15, 615–621 (2018). Kim, S. W. et al. Association between vitamin D receptor polymorphisms and osteoporosis in patients with COPD. Int. J. Chron. Obstruct. Pulmon. Dis. 10, 1809 (2015). Lankisch, P. G., Assmus, C., Lehnick, D., Maisonneuve, P. & Lowenfels, A. B. Acute pancreatitis: does gender matter? Dig. Dis. Sci. 46, 2470–2474 (2001). Ankjær-Jensen, A., Rosling, P. & Bilde, L. 
Variable prospective financing in the Danish hospital sector and the development of a Danish case-mix system. Health Care Manag. Sci. 9, 259–268 (2006). Kruschke, J. K. Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan, Second Edition, https://doi.org/10.1016/C2012-0-00477-2 (2014). Carpenter, B. et al. Stan: a probabilistic programming language. J. Stat. Softw. 76, 1–32 (2017). Hoffman, M. D. & Gelman, A. The No-U-Turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo. J. Mach. Learn. Res. 15, 30 (2014). Betancourt, M. Diagnosing biased inference with divergences. http://mc-stan.org/users/documentation/case-studies/divergences_and_bias.html (accessed 17 April 2017). Gelman, A. & Rubin, D. B. Inference from iterative simulation using multiple sequences. Stat. Sci. 7, 457–472 (1992). Morris, J. A. & Gardner, M. J. Calculating confidence intervals for relative risks (odds ratios) and standardised ratios and rates. Br. Med. J. (Clin. Res. Ed.). 296, 1313–1316 (1988). We would like to acknowledge funding from the Novo Nordisk Foundation (grant agreements NNF14CC0001 and NNF17OC0027594). Novo Nordisk Foundation Center for Protein Research, Faculty of Health and Medical Sciences, University of Copenhagen, 2200, Copenhagen, Denmark David Westergaard, Pope Moseley, Freja Karuna Hemmingsen Sørup & Søren Brunak Unit of Clinical Pharmacology, Roskilde University Hospital, 4000, Roskilde, Denmark Freja Karuna Hemmingsen Sørup Institute for Genomics and Bioinformatics and Department of Computer Science, University of California, Irvine, CA, 92697, USA Pierre Baldi David Westergaard Pope Moseley Søren Brunak D.W. and S.B. conceived the study. S.B. obtained the funding. D.W. and S.B. performed the literature search, figures, study design, and data analysis. D.W., F.K.H.S., P.M., P.B., and S.B. contributed to data interpretation. D.W. and S.B. wrote the initial draft, and D.W., F.K.H.S., P.M., P.B. and S.B.
contributed to the final article. Correspondence to Søren Brunak. The authors declare no competing interests. Journal peer review information: Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. Peer reviewer reports are available. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Peer Review File Description of Additional Supplementary Files Supplementary Data 1 Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. Westergaard, D., Moseley, P., Sørup, F.K.H. et al. Population-wide analysis of differences in disease progression patterns in men and women. Nat Commun 10, 666 (2019). https://doi.org/10.1038/s41467-019-08475-9
CommonCrawl
Convergence to a self-normalized G-Brownian motion Zhengyan Lin1 & Li-Xin Zhang ORCID: orcid.org/0000-0002-0904-36781 G-Brownian motion has a very rich and interesting new structure that nontrivially generalizes the classical Brownian motion. Its quadratic variation process is also a continuous process with independent and stationary increments. We prove a self-normalized functional central limit theorem for independent and identically distributed random variables under the sub-linear expectation with the limit process being a G-Brownian motion self-normalized by its quadratic variation. To prove the self-normalized central limit theorem, we also establish a new Donsker's invariance principle with the limit process being a generalized G-Brownian motion. Let {X n ;n≥1} be a sequence of independent and identically distributed random variables on a probability space \((\Omega, \mathcal {F}, P)\). Set \(S_{n}=\sum _{j=1}^{n} X_{j}\). Suppose EX 1=0 and \(EX_{1}^{2}=\sigma ^{2}>0\). The well-known central limit theorem says that $$ \frac{S_{n}}{\sqrt{n}}\overset{d}\rightarrow N\left(0,\sigma^{2}\right), $$ or, equivalently, for any bounded continuous function ψ(x), $$ E\left[\psi\left(\frac{S_{n}}{\sqrt{n}}\right)\right]\rightarrow E\left[\psi(\xi)\right], $$ where ξ∼N(0,σ 2) is a normal random variable. If the normalization factor \(\sqrt {n}\) is replaced by \(\sqrt {V_{n}}\), where \(V_{n}=\sum _{j=1}^{n} X_{j}^{2}\), then $$ \frac{S_{n}}{\sqrt{V_{n}}}\overset{d}\rightarrow N(0,1). $$ Giné et al. (1997) proved that (3) holds if and only if EX 1=0 and $$ {\lim}_{x\rightarrow \infty} \frac{x^{2}P\left(|X_{1}|\ge x\right)}{EX_{1}^{2}I\{|X_{1}|\le x\}}=0. $$ The result (3) is referred to as the self-normalized central limit theorem. The purpose of this paper is to establish the self-normalized central limit theorem under the sub-linear expectation.
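The classical statement (3) can be checked numerically. The sketch below is only an illustration (the distribution, sample sizes, and seed are our own choices): it samples a symmetric density \(|x|^{-3}\) on |x|≥1, for which EX 1=0 and \(EX_{1}^{2}=\infty \), yet condition (4) holds, since x 2 P(|X 1|≥x)/EX 1 2 I{|X 1|≤x} = 1/(2 ln x) → 0.

```python
import numpy as np

def self_normalized_stats(n, reps, rng):
    """Sample `reps` copies of S_n / sqrt(V_n) for a symmetric heavy-tailed X."""
    u = rng.random((reps, n))
    signs = rng.choice([-1.0, 1.0], size=(reps, n))
    x = signs * (1.0 - u) ** (-0.5)   # P(|X| > t) = t^{-2} for t >= 1
    s = x.sum(axis=1)                 # S_n
    v = (x ** 2).sum(axis=1)          # V_n
    return s / np.sqrt(v)

rng = np.random.default_rng(0)
t = self_normalized_stats(n=1000, reps=2000, rng=rng)
# By symmetry E[T] = 0 and E[T^2] = E[S^2 / V] = 1 exactly, although E[X^2] is infinite.
print(round(t.mean(), 2), round(t.var(), 2))
```

Conditioning on the magnitudes |X i| shows E[S n 2 | |X 1|,…,|X n|] = V n for a symmetric distribution, which is why the second moment of the self-normalized statistic is exactly one at every n.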
The sub-linear expectation, or also called G-expectation, is a nonlinear expectation generalizing the notions of backward stochastic differential equations, g-expectations, and provides a flexible framework to model non-additive probability problems and the volatility uncertainty in finance. Peng (2006, 2008a,b) introduced a general framework of the sub-linear expectation of random variables and the notions of the G-normal random variable, G-Brownian motion, independent and identically distributed random variables, etc., under the sub-linear expectation. The construction of sub-linear expectations on the space of continuous paths and discrete-time paths can also be found in Yan et al. (2012) and Nutz and van Handel (2013). For basic properties of the sub-linear expectation, one can refer to Peng (2008b, 2009, 2010a etc.). For stochastic calculus and stochastic differential equations with respect to a G-Brownian motion, one can refer to Li and Peng (2011), Hu et al. (2014a, b), etc., and a book by Peng (2010a). The central limit theorem under the sub-linear expectation was first established by Peng (2008b). It says that (2) remains true when the expectation E is replaced by a sub-linear expectation \(\hat {\mathbb {E}}\) if {X n ;n≥1} are independent and identically distributed under \(\hat {\mathbb {E}}\), i.e., $$ \frac{S_{n}}{\sqrt{n}}\overset{d}\rightarrow \xi~\text{under}~\hat{\mathbb{E}}, $$ where ξ is a G-normal random variable. In the classical case, when \(\textsf {E}[X_{1}^{2}]\) is finite, (3) follows from the central limit theorem (1) directly by Slutsky's lemma and the fact that $$ \frac{V_{n}}{n}\overset{P}\rightarrow \sigma^{2}. $$ The latter is due to the law of large numbers. Under the framework of the sub-linear expectation, \(\frac {V_{n}}{n}\) no longer converges to a constant. The self-normalized central limit theorem cannot follow from the central limit theorem (5) directly.
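A toy discretization (our own illustration, not a construction used in the paper) makes the last point concrete: model each volatility scenario by conditional standard deviations chosen between a lower and an upper bound, possibly depending on the past. Different scenarios then drive V n/n to different values between the squared bounds, so no single constant limit is available.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sig_lo, sig_hi = 20_000, 1.0, 2.0
eps = rng.standard_normal(n)                  # driving noise with unit variance

# Two extreme scenarios: volatility pinned at each bound.
v_lo = ((sig_lo * eps) ** 2).mean()           # V_n / n near sig_lo**2 = 1
v_hi = ((sig_hi * eps) ** 2).mean()           # V_n / n near sig_hi**2 = 4
# An adapted scenario: the k-th volatility depends only on eps_1, ..., eps_{k-1}.
past = np.concatenate(([0.0], np.cumsum(eps)[:-1]))
theta = np.where(past > 0, sig_hi, sig_lo)
v_ad = ((theta * eps) ** 2).mean()            # lands somewhere in [1, 4]
```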
In this paper, we will prove that $$ \frac{S_{n}}{\sqrt{V_{n}}}\overset{d}\rightarrow \frac{W_{1}}{\sqrt{\langle W\rangle_{1}}}~\text{under}~\hat{\mathbb{E}}, $$ where W t is a G-Brownian motion and 〈W〉 t is its quadratic variation process. A very interesting phenomenon of G-Brownian motion is that its quadratic variation process is also a continuous process with independent and stationary increments, and thus can still be regarded as a Brownian motion. When the sub-linear expectation \(\hat {\mathbb {E}}\) reduces to a linear one, W t is the classical Brownian motion with W 1∼N(0,σ 2) and 〈W〉 t =t σ 2, and then (6) is just (3). Our main results on the self-normalized central limit theorem will be given in Section "Main results", where the process of the self-normalized partial sums \({S_{[nt]}}/{\sqrt {V_{n}}}\) is proved to converge to a self-normalized G-Brownian motion \({W_{t}}/{\sqrt {\langle W\rangle _{1}}}\). We also consider the case in which the second moments of X i 's are infinite and obtain the self-normalized central limit theorem under a condition similar to (4). In the next section, we state basic settings in a sub-linear expectation space, including capacity, independence, identical distribution, G-Brownian motion, etc. One can skip this section if these concepts are familiar. To prove the self-normalized central limit theorem, we establish a new Donsker's invariance principle in Section "Invariance principle" with the limit process being a generalized G-Brownian motion. The proof is given in the last section. We use the framework and notations of Peng (2008b). 
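In the linear special case just mentioned, the quadratic variation is deterministic, 〈W〉 t = tσ 2, and can be checked on a discretized path. The sketch below (step count, σ, and seed are arbitrary choices) computes the realized quadratic variation of a classical Brownian motion over a fine partition.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, t, n = 1.5, 1.0, 100_000
dw = sigma * np.sqrt(t / n) * rng.standard_normal(n)  # Brownian increments
qv = (dw ** 2).sum()       # realized quadratic variation over the partition
print(round(qv, 2))        # concentrates at t * sigma**2 = 2.25
```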
Let \((\Omega,\mathcal F)\) be a given measurable space and let \(\mathscr {H}\) be a linear space of real functions defined on \((\Omega,\mathcal F)\) such that if \(X_{1},\ldots, X_{n}\in \mathscr {H}\), then \(\varphi (X_{1},\ldots, X_{n})\in \mathscr {H}\) for each \(\varphi \in C_{b}(\mathbb {R}^{n})\bigcup C_{l,Lip}(\mathbb {R}^{n})\), where \(C_{b}(\mathbb R^{n})\) denotes the space of all bounded continuous functions and \(C_{l,Lip}(\mathbb {R}^{n})\) denotes the linear space of (local Lipschitz) functions φ satisfying $$\begin{array}{@{}rcl@{}} & |\varphi(\boldsymbol{x}) - \varphi(\boldsymbol{y})| \le C(1 + |\boldsymbol{x}|^{m} + |\boldsymbol{y}|^{m})|\boldsymbol{x}- \boldsymbol{y}|, \;\; \forall \boldsymbol{x}, \boldsymbol{y} \in \mathbb R^{n},&\\ & \text {for some}~C > 0, m \in \mathbb N~\text{depending on}~\varphi. & \end{array} $$ \(\mathscr {H}\) is considered as a space of "random variables." In this case, we denote \(\boldsymbol {X}=(X_{1},\ldots, X_{n})\in \mathscr {H}^{n}\). Further, we let \(C_{b,Lip}(\mathbb R^{n})\) denote the space of all bounded and Lipschitz functions on \(\mathbb R^{n}\). Sub-linear expectation and capacity Definition 1 A sub-linear expectation \(\hat {\mathbb {E}}\) on \(\mathscr {H}\) is a function \(\hat {\mathbb {E}}: \mathscr {H}\rightarrow \overline {\mathbb {R}}\) satisfying the following properties: for all \(X, Y\in \mathscr {H}\), we have Monotonicity: If X≥Y then \(\hat {\mathbb {E}} [X]\ge \hat {\mathbb {E}} [Y]\); Constant preserving: \(\hat {\mathbb {E}} [c] = c\); Sub-additivity: \(\hat {\mathbb {E}}[X+Y]\le \hat {\mathbb {E}} [X] +\hat {\mathbb {E}} [Y ]\) whenever \(\hat {\mathbb {E}} [X] +\hat {\mathbb {E}} [Y ]\) is not of the form +∞−∞ or −∞+∞; Positive homogeneity: \(\hat {\mathbb {E}} [\lambda X] = \lambda \hat {\mathbb {E}} [X]\), λ≥0. Here \(\overline {\mathbb R}=[-\infty, \infty ]\). The triple \((\Omega, \mathscr {H}, \hat {\mathbb {E}})\) is called a sub-linear expectation space. Given a sub-linear expectation \(\hat {\mathbb {E}} \), let us denote the conjugate expectation \(\widehat {\mathcal {E}}\) of \(\hat {\mathbb {E}}\) by \(\widehat {\mathcal {E}}[X]:=-\hat {\mathbb {E}}[-X]\), \(\forall X\in \mathscr {H}\). Next, we introduce the capacities corresponding to the sub-linear expectations. Let \(\mathcal G\subset \mathcal F\).
A function \(V:\mathcal G\rightarrow [0,1]\) is called a capacity if $$ V(\emptyset)=0, \;V(\Omega)=1, \;~\text{and}~V(A)\le V(B)\;\; \forall\; A\subset B, \; A,B\in \mathcal G. $$ It is called sub-additive if \(V(A\bigcup B)\le V(A)+V(B)\) for all \(A,B\in \mathcal G\) with \(A\bigcup B\in \mathcal G\). Let \((\Omega, \mathscr {H}, \hat {\mathbb {E}})\) be a sub-linear expectation space and \(\widehat {\mathcal {E}} \) be the conjugate expectation of \(\hat {\mathbb {E}}\). We introduce the pair \((\mathbb {V},\mathcal {V})\) of capacities by setting $$ \mathbb{V}(A):=\inf\{\hat{\mathbb{E}}[\xi]: I_{A}\le \xi, \xi\in \mathscr{H}\}, \;\; \mathcal{V}(A):=1-\mathbb{V}(A^{c}), \;\; A\in \mathcal F, $$ where A c is the complement set of A. Then, \(\mathbb {V}\) is sub-additive and \(\mathbb {V}(A)=\hat {\mathbb {E}}[I_{A}]\), \(\mathcal {V}(A)=\widehat {\mathcal {E}}[I_{A}]\), whenever \(I_{A}\in \mathscr {H}\). Further, we define an extension \(\hat {\mathbb {E}}^{\ast }\) of \(\hat {\mathbb {E}}\) by $$ \hat{\mathbb{E}}^{\ast}[X]:=\inf\{\hat{\mathbb{E}}[Y]: X\le Y, Y\in \mathscr{H}\}, $$ where inf∅=+∞. Then, \(\hat {\mathbb {E}}^{\ast }[X]=\hat {\mathbb {E}}[X]\) for \(X\in \mathscr {H}\) and \(\mathbb {V}(A)=\hat {\mathbb {E}}^{\ast }[I_{A}]\). Independence and distribution (Peng (2006, 2008b)) (Identical distribution) Let X 1 and X 2 be two n-dimensional random vectors defined, respectively, in sub-linear expectation spaces \((\Omega _{1}, \mathscr {H}_{1}, \hat {\mathbb {E}}_{1})\) and \((\Omega _{2}, \mathscr {H}_{2}, \hat {\mathbb {E}}_{2})\). They are called identically distributed, denoted by \(\boldsymbol X_{1}\overset {d}= \boldsymbol X_{2}\) if $$ \hat{\mathbb{E}}_{1}[\varphi(\boldsymbol X_{1})]=\hat{\mathbb{E}}_{2}[\varphi(\boldsymbol X_{2})], \;\; \forall \varphi\in C_{l,Lip}(\mathbb R^{n}), $$ whenever the sub-expectations are finite. A sequence {X n ;n≥1} of random variables is said to be identically distributed if \(X_{i}\overset {d}= X_{1}\) for each i≥1. (Independence) In a sub-linear expectation space \((\Omega, \mathscr {H}, \hat {\mathbb {E}})\), a random vector \(\boldsymbol {Y}=(Y_{1},\ldots, Y_{n})\), \(Y_{i}\in \mathscr {H}\), is said to be independent to another random vector \(\boldsymbol {X}=(X_{1},\ldots, X_{m})\), \(X_{i}\in \mathscr {H}\), under \(\hat {\mathbb {E}}\) if for each test function \(\varphi \in C_{l,Lip}(\mathbb R^{m} \times \mathbb R^{n})\) we have $$ \hat{\mathbb{E}} [\varphi(\boldsymbol{X}, \boldsymbol{Y})] = \hat{\mathbb{E}} \left[\hat{\mathbb{E}}[\varphi(\boldsymbol{x}, \boldsymbol{Y})]\big|_{\boldsymbol{x}=\boldsymbol{X}}\right], $$ whenever \(\overline {\varphi }(\boldsymbol {x}):=\hat {\mathbb {E}}\left [|\varphi (\boldsymbol {x}, \boldsymbol {Y})|\right ]<\infty \) for all x and \(\hat {\mathbb {E}}\left [|\overline {\varphi }(\boldsymbol {X})|\right ]<\infty \).
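A sub-linear expectation can be realized concretely as an upper expectation over a family of probability measures, \(\hat{\mathbb E}[X]=\sup_{P} \mathsf E_{P}[X]\). The finite sketch below (a three-point sample space and two measures, both invented for illustration) checks sub-additivity and positive homogeneity from Definition 1, and that the conjugate (lower) expectation never exceeds the upper one.

```python
import numpy as np

omega = np.array([-1.0, 0.0, 1.0])              # three-point sample space
P = np.array([[0.25, 0.50, 0.25],               # an illustrative family of
              [0.10, 0.30, 0.60]])              # two probability measures

def E_hat(f):
    """Sub-linear (upper) expectation: sup over the family of linear expectations."""
    return float((P * f(omega)).sum(axis=1).max())

def E_conj(f):
    """Conjugate (lower) expectation: -E_hat[-X]."""
    return -E_hat(lambda w: -f(w))

X = lambda w: w
Y = lambda w: w ** 2

assert E_hat(lambda w: X(w) + Y(w)) <= E_hat(X) + E_hat(Y) + 1e-12  # sub-additivity
assert abs(E_hat(lambda w: 3.0 * X(w)) - 3.0 * E_hat(X)) < 1e-12    # positive homogeneity
assert E_conj(X) <= E_hat(X)                                        # lower <= upper
```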
(IID random variables) A sequence of random variables {X n ;n≥1} is said to be independent and identically distributed (IID), if \(X_{i}\overset {d}=X_{1}\) and X i+1 is independent to (X 1,…,X i ) for each i≥1. G-normal distribution, G-Brownian motion and its quadratic variation Let \(0<\underline {\sigma }\le \overline {\sigma }<\infty \) and \(G(\alpha)=\frac {1}{2}\left (\overline {\sigma }^{2} \alpha ^{+} - \underline {\sigma }^{2} \alpha ^{-}\right)\). X is called a normal \(N\left (0, \left [\underline {\sigma }^{2}, \overline {\sigma }^{2}\right ]\right)\) distributed random variable (written as \(X\sim N\left (0, \left [\underline {\sigma }^{2}, \overline {\sigma }^{2}\right ]\right)\)) under \(\hat {\mathbb {E}}\), if for any bounded Lipschitz function φ, the function \(u(x,t)=\hat {\mathbb {E}}\left [\varphi \left (x+\sqrt {t} X\right)\right ]\) (\(x\in \mathbb R, t\ge 0\)) is the unique viscosity solution of the following heat equation: $$ \partial_{t} u -G\left(\partial_{xx}^{2} u\right) =0, \;\; u(0,x)=\varphi(x). $$ Let C[0,1] be the space of continuous functions on [0,1] equipped with the supremum norm \(\|x\|=\sup \limits _{0\le t\le 1}|x(t)|\) and let C b (C[0,1]) be the set of bounded continuous functions \(h(x):C[0,1]\rightarrow \mathbb R\). The modulus of continuity of an element x∈C[0,1] is defined by $$\omega_{\delta}(x)=\sup_{|t-s|<\delta}|x(t)-x(s)|.
$$ It is shown that there is a sub-linear expectation space \((\widetilde {\Omega }, \widetilde {\mathscr {H}}, \widetilde {\mathbb {E}})\) with \(\widetilde {\Omega }= C[0,1]\) and \(C_{b}(C[0,1])\subset \widetilde {\mathscr {H}}\) such that \(\widetilde {\mathscr {H}}\) is a Banach space, and the canonical process \(W(t)(\omega) = \omega _{t} (\omega \in \widetilde {\Omega })\) is a G-Brownian motion with \(W(1)\sim N\left (0, \left [\underline {\sigma }^{2}, \overline {\sigma }^{2}\right ]\right)\) under \(\widetilde {\mathbb E}\), i.e., for all 0≤t 1<…<t n ≤1, \(\varphi \in C_{l,Lip}(\mathbb R^{n})\), $$ \widetilde{\mathbb E}\left[\varphi\left(W(t_{1}),\ldots, W(t_{n-1}), W(t_{n})-W(t_{n-1})\right)\right] =\widetilde{\mathbb E}\left[\psi\left(W(t_{1}),\ldots, W(t_{n-1})\right)\right], $$ where \(\psi \left (x_{1},\ldots, x_{n-1}\right)=\widetilde {\mathbb {E}}\left [\varphi \left (x_{1},\ldots, x_{n-1}, \sqrt {t_{n}-t_{n-1}}W(1)\right)\right ]\) (cf. Peng (2006, 2008a, 2010a), Denis et al. (2011)). The quadratic variation process of a G-Brownian motion W is defined by $$\langle W \rangle_{t}={\lim}_{\|\Pi_{t}^{N}\|\rightarrow 0}\sum_{j=1}^{N-1} \left(W\left(t_{j}^{N}\right)-W\left(t_{j-1}^{N}\right)\right)^{2}=W^{2}(t)-2\int_{0}^{t} W(t) dW(t), $$ where \(\Pi _{t}^{N}=\left \{t_{0}^{N},t_{1}^{N},\ldots, t_{N}^{N}\right \}\) is a partition of [0,t] and \(\left \|\Pi _{t}^{N}\right \|=\max _{j}\left |t_{j}^{N}-t_{j-1}^{N}\right |\), and the limit is taken in L 2, i.e., $$ {\lim}_{\left\|\Pi_{t}^{N}\right\|\rightarrow 0}\widetilde{\mathbb{E}}\left[\left(\sum_{j=1}^{N-1}\left(W\left(t_{j}^{N}\right)-W\left(t_{j-1}^{N}\right)\right)^{2}-\langle W \rangle_{t}\right)^{2}\right]=0. $$ The quadratic variation process 〈W〉 t is also a continuous process with independent and stationary increments. For the properties and the distribution of the quadratic variation process, one can refer to a book by Peng (2010a). Denis et al. (2011) showed the following representation of the G-Brownian motion (cf. Theorem 52). Lemma 1 Let \((\Omega, \mathcal {F}, P)\) be a probability measure space and let {B(t)} t≥0 be a P-Brownian motion.
Then, for all bounded continuous functions \(\varphi : C[0,1]\rightarrow \mathbb R\), $$\widetilde{\mathbb E}\left[\varphi\left(W(\cdot)\right)\right]=\sup_{\theta\in \Theta}\mathsf{E}_{P}\left[\varphi\left(W_{\theta}(\cdot)\right)\right],\;\; W_{\theta}(t) = \int_{0}^{t}\theta(s) dB(s), $$ where Θ is the collection of all adapted processes θ(s) taking values in \([\underline {\sigma }, \overline {\sigma }]\). For the remainder of this paper, the sequences {X n ;n≥1}, {Y n ;n≥1}, etc., of the random variables are considered in \((\Omega, \mathscr {H}, \hat {\mathbb {E}})\). Without specification, we suppose that {X n ;n≥1} is a sequence of independent and identically distributed random variables in \((\Omega, \mathscr {H}, \hat {\mathbb {E}})\) with \(\hat {\mathbb {E}}[X_{1}]=\widehat {\mathcal {E}}[X_{1}]=0\), \(\hat {\mathbb {E}}\left [X_{1}^{2}\right ]=\overline {\sigma }^{2}\), and \(\widehat {\mathcal {E}}\left [X_{1}^{2}\right ]=\underline {\sigma }^{2}\). Denote \(S_{0}^{X}=0\), \(S_{n}^{X}=\sum _{k=1}^{n} X_{k}\), V 0=0, \(V_{n}=\sum _{k=1}^{n} X_{k}^{2}\). Suppose that \((\widetilde {\Omega }, \widetilde {\mathscr {H}}, \widetilde {\mathbb {E}})\) is a sub-linear expectation space which is rich enough such that there is a G-Brownian motion W(t) with \(W(1)\sim N\left (0,\left [\underline {\sigma }^{2},\overline {\sigma }^{2}\right ]\right)\). We denote a pair of capacities corresponding to the sub-linear expectation \(\widetilde {\mathbb E}\) by \(\left (\widetilde {\mathbb {V}},\widetilde {\mathcal {V}}\right)\), and the extension of \(\widetilde {\mathbb E}\) by \(\widetilde {\mathbb {E}}^{\ast }\). We consider the convergence of the process \(S_{[nt]}^{X}\). Because it is not in C[0,1], it needs to be modified. Define the C[0,1]-valued random variable \(\widetilde {S}_{n}^{X}(\cdot)\) by setting $$\widetilde{S}_{n}^{X}(t)= \left\{\begin{array}{cc} \sum_{j=1}^{k} X_{j}, \; \text{if}~t=k/n \; (k=0,1,\ldots, n);\\ \text{extended by linear interpolation in each interval }\\ \qquad \quad \left[[k-1]n^{-1}, kn^{-1}\right]. \end{array}\right.$$ Then, \( \widetilde {S}_{n}^{X}(t)=S_{[nt]}^{X}+(nt-[nt])X_{[nt]+1}\). Here [nt] is the largest integer less than or equal to nt.
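The interpolation formula for \(\widetilde {S}_{n}^{X}(t)\) translates directly into code. A minimal sketch (the sample values are arbitrary) checks that the closed form agrees with the partial sums at the grid points t=k/n and interpolates linearly in between.

```python
import numpy as np

def s_tilde(x, t):
    """Polygonal partial-sum process: S_[nt] + (nt - [nt]) * X_{[nt]+1}."""
    n = len(x)
    s = np.concatenate(([0.0], np.cumsum(x)))   # S_0, S_1, ..., S_n
    k = min(int(np.floor(n * t)), n)            # [nt], capped so that t = 1 works
    return s[k] + (n * t - k) * (x[k] if k < n else 0.0)

x = np.array([1.0, 2.0, 3.0, 4.0])              # X_1, ..., X_4
print(s_tilde(x, 0.5), s_tilde(x, 1.0))         # S_2 = 3.0 and S_4 = 10.0
```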
Zhang (2015) obtained the functional central limit theorem as follows. Theorem 1 Suppose \(\hat {\mathbb {E}}\left [\left (X_{1}^{2}-b\right)^{+}\right ]\rightarrow 0\) as b→∞. Then, for all bounded continuous functions \(\varphi :C[0,1]\rightarrow \mathbb {R}\), $$ \hat{\mathbb{E}}\left[\varphi\left(\frac{\widetilde{S}_{n}^{X}(\cdot)}{\sqrt{n}}\right)\right]\rightarrow \widetilde{\mathbb{E}}\left[\varphi\left(W(\cdot) \right) \right]. $$ Replacing the normalization factor \(\sqrt {n}\) by \(\sqrt {V_{n}}\), we obtain the self-normalized process of partial sums: $$W_{n}(t)=\frac{\widetilde{S}_{n}^{X}(t)}{\sqrt{V_{n}}}, $$ where \(\frac {0}{0}\) is defined to be 0. Our main result is the following self-normalized functional central limit theorem (FCLT). Theorem 2 Suppose \(\hat {\mathbb {E}}\left [\left (X_{1}^{2}-b\right)^{+}\right ]\rightarrow 0\) as b→∞. Then, for all bounded continuous functions \(\varphi :C[0,1]\rightarrow \mathbb {R}\), $$ \hat{\mathbb{E}}^{\ast}\left[\varphi\left(W_{n}(\cdot)\right)\right]\rightarrow\widetilde{\mathbb{E}}\left[\varphi\left(\frac{W(\cdot)}{\sqrt{\langle W \rangle_{1}}}\right) \right]. $$ In particular, for all bounded continuous functions \(\varphi :\mathbb {R}\rightarrow \mathbb {R}\), $$ \begin{aligned} \hat{\mathbb{E}}^{\ast}\left[\varphi\left(\frac{S_{n}^{X}}{\sqrt{V_{n}}}\right)\right]\rightarrow & \widetilde{\mathbb{E}}\left[\varphi\left(\frac{W(1)}{\sqrt{\langle W \rangle_{1}}}\right) \right]\\ &=\sup_{\theta\in \Theta}\textsf{E}_{P}\left[\varphi\left(\frac{\int_{0}^{1}\theta(s) d B(s)}{\sqrt{\int_{0}^{1} \theta^{2}(s) ds }}\right) \right]. \end{aligned} $$ It is obvious that $$\widetilde{\mathbb{E}}\left[\varphi\left(\frac{W(\cdot)}{\sqrt{\langle W \rangle_{1}}}\right) \right] \ge \textsf{E}_{P} \left[\varphi\left(B(\cdot)\right) \right]. $$ An interesting problem is how to estimate the upper bounds of the expectations on the right hand side of (10) and (11).
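A crude Monte Carlo lower bound for the right-hand side of (11) can be sketched by maximizing over a handful of strategies θ (the strategies, the test function φ, and the discretization below are our own illustrative choices, assuming Θ consists of adapted processes with values between the two volatility bounds). Note that for a constant θ the discretized self-normalized ratio reduces to B 1 exactly, so every constant strategy yields the same value E P[φ(N(0,1))]; only path-dependent strategies can move the estimate.

```python
import numpy as np

rng = np.random.default_rng(3)
m, reps = 200, 20_000
sig_lo, sig_hi = 1.0, 2.0
dt = 1.0 / m
db = np.sqrt(dt) * rng.standard_normal((reps, m))    # Brownian increments on [0, 1]

def ratio(theta):
    """Discretized  int theta dB / sqrt(int theta^2 ds)  per simulated path."""
    return (theta * db).sum(axis=1) / np.sqrt((theta ** 2).sum(axis=1) * dt)

phi = lambda z: np.maximum(z, 0.0)                   # illustrative test function

lo = phi(ratio(np.full((reps, m), sig_lo))).mean()   # constant strategies: the
hi = phi(ratio(np.full((reps, m), sig_hi))).mean()   # ratio is standard normal
past = np.hstack([np.zeros((reps, 1)), np.cumsum(db, axis=1)[:, :-1]])
ad = phi(ratio(np.where(past > 0, sig_hi, sig_lo))).mean()  # adapted toy strategy
lower_bound = max(lo, hi, ad)    # a Monte Carlo lower bound for the sup in (11)
```

For this φ the constant-strategy value is E[Z⁺] = 1/√(2π) ≈ 0.3989 with Z ~ N(0,1); the estimate for the adapted strategy is reported without any claim about its size.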
Further, \(\frac {W(\cdot)}{\sqrt {\langle W\rangle _{1}}}\overset {d}=\frac {\overline {W}(\cdot)}{\sqrt {\langle \overline {W}\rangle _{1}}}\), where \(\overline {W}(t)\) is a G-Brownian motion with \(\overline {W}(1)\sim N(0,[r^{-2},1])\), \(r^{2}=\overline {\sigma }^{2}/\underline {\sigma }^{2}\). For the classical self-normalized central limit theorem, Giné et al. (1997) showed that the finiteness of the second moments can be relaxed to the condition (4). Csörgő et al. (2003) proved the self-normalized functional central limit theorem under (4). The next theorem gives a similar result under the sub-linear expectation and is an extension of Theorem 2. Theorem 3 Let {X n ;n≥1} be a sequence of independent and identically distributed random variables in the sub-linear expectation space \((\Omega, \mathscr {H}, \hat {\mathbb {E}})\) with \(\hat {\mathbb {E}}[X_{1}]=\widehat {\mathcal {E}}[X_{1}]=0\). Denote \(l(x)=\hat {\mathbb {E}}\left [X_{1}^{2}\wedge x^{2}\right ]\). Suppose (I) \(x^{2}\mathbb {V}(|X_{1}|\ge x)=o\left (l(x)\right)\) as x→∞; (II) \({\lim }_{x\rightarrow \infty } \frac {\hat {\mathbb {E}}\left [X_{1}^{2}\wedge x^{2}\right ]}{\widehat {\mathcal {E}}\left [X_{1}^{2}\wedge x^{2}\right ]}=r^{2}<\infty \); (III) \(\hat {\mathbb {E}}[(|X_{1}|-c)^{+}]\rightarrow 0\) as c→∞. Then, the conclusions of Theorem 2 remain true with W(t) being a G-Brownian motion such that W(1)∼N(0,[r −2,1]). Note that for c>1, \(l(cx)=\hat {\mathbb {E}}\left [X_{1}^{2}\wedge (cx)^{2}\right ]\le l(x)+(cx)^{2}\mathbb {V}(|X_{1}|\ge x)\). Condition (I) implies that l(cx)/l(x)→1 as x→∞, i.e., l(x) is a slowly varying function. Therefore, there is a constant C such that \(\int _{x}^{\infty }y^{-2}l(y)dy \le C x^{-1} l(x)\) if x is large enough. So, \(\int _{x}^{\infty }\mathbb {V}(|X_{1}|\ge y)dy=o(x^{-1}l(x))\). Also, by Lemma 3.9 (b) of Zhang (2016), condition (III) implies that \(\hat {\mathbb {E}}\left [(|X_{1}|-x)^{+}\right ]\le \int _{x}^{\infty }\mathbb {V}(|X_{1}|\ge y)dy\).
Hence, \(\hat {\mathbb {E}}\left [\left (|X_{1}|-x\right)^{+}\right ]=o(x^{-1}l(x))\) if conditions (I) and (III) are satisfied. When \(\hat {\mathbb {E}}\) is a continuous sub-linear expectation, for any random variable Y we have \(\hat {\mathbb {E}}[|Y|]\le \int _{0}^{\infty }\mathbb {V}(|Y|\ge y)dy\) by Lemma 3.9 (c) of Zhang (2016), and so the condition (III) can be removed. Here, \(\hat {\mathbb {E}}\) is called continuous if, for any \(X_{n}, X\in \mathscr {H}\) with \(\hat {\mathbb {E}}[X_{n}],\hat {\mathbb {E}}[X]<\infty \), \(\hat {\mathbb {E}}[X_{n}]\nearrow \hat {\mathbb {E}}[X]\) whenever 0≤X n ↗X, and \(\hat {\mathbb {E}}[X_{n}]\searrow \hat {\mathbb {E}}[X]\) whenever X n ↘X. Invariance principle To prove Theorems 2 and 3, we will prove a new Donsker's invariance principle. Let {(X i ,Y i );i≥1} be a sequence of independent and identically distributed random vectors in the sub-linear expectation space \((\Omega, \mathscr {H}, \hat {\mathbb {E}})\) with \(\hat {\mathbb {E}}[X_{1}]=\hat {\mathbb {E}}[-X_{1}]=0\), \(\hat {\mathbb {E}}[X_{1}^{2}]=\overline {\sigma }^{2}\), \(\widehat {\mathcal {E}}[X_{1}^{2}]=\underline {\sigma }^{2}\), \(\hat {\mathbb {E}}[Y_{1}]=\overline {\mu }\), \(\widehat {\mathcal {E}}[Y_{1}]=\underline {\mu }\). Denote $$ G(p,q)=\hat{\mathbb{E}}\left[\frac{1}{2} q X_{1}^{2}+pY_{1}\right], \;\; p,q\in \mathbb R. $$ Let ξ be a G-normal distributed random variable, η be a maximal distributed random variable such that the distribution of (ξ,η) is characterized by the following parabolic partial differential equation (PDE) defined on \([0,\infty)\times \mathbb {R}\times \mathbb {R}\): $$ \partial_{t} u -G\left(\partial_{y} u, \partial_{xx}^{2} u\right) =0, $$ i.e., for any bounded Lipschitz function \(\varphi (x,y):\mathbb {R}^{2}\rightarrow \mathbb {R}\), the function \(u(x,y,t)=\widetilde {\mathbb {E}}\left [\varphi \left (x+\sqrt {t} \xi, y+t\eta \right)\right ]\) (\(x,y \in \mathbb {R}, t\ge 0\)) is the unique viscosity solution of the PDE (13) with Cauchy condition u| t=0=φ.
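The one-dimensional G-heat equation characterizing the G-normal distribution, i.e., the special case of the PDE (13) with Y 1 ≡ 0, can be solved by an explicit finite-difference scheme, which gives a concrete numerical handle on G-normal expectations. With φ(x)=x 2 the exact solution is u=x 2+σ̄ 2 t, and with φ(x)=−x 2 it is u=−x 2−σ̲ 2 t, so u(1,0) recovers \(\hat {\mathbb {E}}[X^{2}]=\overline {\sigma }^{2}\) and \(-\hat {\mathbb {E}}[-X^{2}]=\underline {\sigma }^{2}\). The grid parameters below are arbitrary choices satisfying the stability bound dt ≤ dx 2/σ̄ 2; the sketch is illustrative and not part of the proofs.

```python
import numpy as np

def g_heat(phi, sig_lo=1.0, sig_hi=2.0, L=8.0, dx=0.1, T=1.0, dt=0.001):
    """Explicit scheme for du/dt = G(u_xx), G(a) = (sig_hi^2 a^+ - sig_lo^2 a^-)/2.

    Returns u(T, 0), i.e. E_hat[phi(sqrt(T) X)] for X ~ N(0, [sig_lo^2, sig_hi^2]).
    Endpoint values are frozen; on [-L, L] the boundary error is negligible here.
    """
    x = np.arange(-L, L + dx / 2, dx)
    u = phi(x).astype(float)
    for _ in range(int(round(T / dt))):
        lap = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2
        g = 0.5 * (sig_hi ** 2 * np.maximum(lap, 0.0)
                   - sig_lo ** 2 * np.maximum(-lap, 0.0))
        u[1:-1] += dt * g
    return u[len(u) // 2]

upper = g_heat(lambda x: x ** 2)       # approaches sig_hi**2 = 4
lower = -g_heat(lambda x: -x ** 2)     # approaches sig_lo**2 = 1
print(round(upper, 2), round(lower, 2))
```

Since the discrete Laplacian of a quadratic function is exact, the interior solution stays exact up to the slow inward diffusion of the frozen-boundary error, which is negligible at the center of [-L, L] for T = 1.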
Further, let B t and b t be two random processes such that the distribution of the process (B ·,b ·) is characterized by B 0=0, b 0=0; for any 0≤t 1≤…≤t k ≤s≤t+s, (B s+t −B s ,b s+t −b s ) is independent to \((B_{t_{j}}, b_{t_{j}}), j=1,\ldots,k\), in the sense that, for any \(\varphi \in C_{l,Lip}(\mathbb {R}^{2(k+1)})\), $$ \begin{aligned} & \widetilde{\mathbb{E}}\left[\varphi\left((B_{t_{1}}, b_{t_{1}}),\ldots,(B_{t_{k}}, b_{t_{k}}), (B_{s+t}-B_{s}, b_{s+t}-b_{s})\right)\right]\\ &\qquad = \widetilde{\mathbb{E}}\left[\psi\left((B_{t_{1}}, b_{t_{1}}),\ldots,(B_{t_{k}}, b_{t_{k}})\right)\right], \end{aligned} $$ where $$ \begin{aligned} \psi\left((x_{1}, y_{1}),\ldots,(x_{k}, y_{k})\right)= \widetilde{\mathbb{E}}\left[\varphi\left((x_{1}, y_{1}),\ldots,(x_{k}, y_{k})\right.\right.,\\ \left.\left.(B_{s+t}-B_{s}, b_{s+t}-b_{s})\right)\right]; \end{aligned} $$ for any t,s>0, \((B_{s+t}-B_{s},b_{s+t}-b_{s})\overset {d}\sim (B_{t},b_{t})\) under \(\widetilde {\mathbb {E}}\); for any t>0, \((B_{t},b_{t})\overset {d}\sim \left (\sqrt {t}B_{1}, tb_{1}\right)\) under \(\widetilde {\mathbb {E}}\); the distribution of (B 1,b 1) is characterized by the PDE (13). It is easily seen that B t is a G-Brownian motion with \(B_{1}\sim N\left (0,[\underline {\sigma }^{2},\overline {\sigma }^{2}]\right)\), and (B t ,b t ) is a generalized G-Brownian motion introduced by Peng (2010a). The existence of the generalized G-Brownian motion can be found in Peng (2010a). Theorem 4 Suppose \(\hat {\mathbb {E}}\left [(X_{1}^{2}-b)^{+}\right ]\rightarrow 0\) and \(\hat {\mathbb {E}}\left [(|Y_{1}|-b)^{+}\right ]\rightarrow 0\) as b→∞. Let $$\widetilde{\boldsymbol{W}}_{n}(t)=\left(\frac{\widetilde{S}_{n}^{X}(t)}{\sqrt{n}}, \frac{\widetilde{S}_{n}^{Y}(t)}{n}\right).
$$ Then, for any bounded continuous function \(\varphi :C[0,1]\times C[0,1]\rightarrow \mathbb R\), $$ {\lim}_{n\rightarrow \infty}\hat{\mathbb{E}}\left[\varphi\left(\widetilde{\boldsymbol{W}}_{n}(\cdot) \right)\right]= \widetilde{\mathbb{E}}\left[\varphi\left(B_{\cdot},b_{\cdot}\right)\right]. $$ Further, let p≥2, q≥1, and assume \(\hat {\mathbb {E}}[|X_{1}|^{p}]<\infty \), \(\hat {\mathbb {E}}[|Y_{1}|^{q}]<\infty \). Then, for any continuous function \(\varphi :C[0,1]\times C[0,1]\rightarrow \mathbb R\) with |φ(x,y)|≤C(1+∥x∥p+∥y∥q), $$ {\lim}_{n\rightarrow \infty}\hat{\mathbb{E}}^{\ast}\left[\varphi\left(\widetilde{\boldsymbol{W}}_{n}(\cdot) \right)\right]= \widetilde{\mathbb{E}}\left[\varphi\left(B_{\cdot},b_{\cdot}\right)\right]. $$ Here ∥x∥= sup0≤t≤1|x(t)| for x∈C[0,1]. Suppose now that X k and Y k are random vectors in \(\mathbb R^{d}\) with \(\hat {\mathbb {E}}[X_{k}]=\hat {\mathbb {E}}[-X_{k}]=0\), \(\hat {\mathbb {E}}[(\|X_{1}\|^{2}-b)^{+}]\rightarrow 0\) and \(\hat {\mathbb {E}}[(\|Y_{1}\|-b)^{+}]\rightarrow 0\) as b→∞. Then, the function G in (12) becomes $$ G(p,A)=\hat{\mathbb{E}}\left[\frac{1}{2}\langle AX_{1},X_{1}\rangle+\langle p,Y_{1}\rangle\right],\;\; p\in \mathbb R^{d}, A\in\mathbb S(d), $$ where \(\mathbb S(d)\) is the collection of all d×d symmetric matrices. The conclusion of Theorem 4 remains true with the distribution of (B 1,b 1) being characterized by the following parabolic partial differential equation defined on \([0,\infty)\times \mathbb {R}^{d}\times \mathbb {R}^{d}\): $$ \partial_{t} u -G\left(D_{y} u, D_{xx}^{2} u\right) =0,\;\; u|_{t=0}=\varphi, $$ where \(D_{y} =(\partial _{y_{i}})_{i=1}^{d}\) and \(D_{xx}^{2}=(\partial _{x_{i}x_{j}}^{2})_{i,j=1}^{d}\). As a conclusion of Theorem 4, we have $$ \hat{\mathbb{E}}\left[\varphi\left(\frac{S_{n}^{X}}{\sqrt{n}},\frac{S_{n}^{Y}}{n}\right)\right]\rightarrow \widetilde{\mathbb{E}}\left[\varphi(B_{1},b_{1})\right],\;\; \varphi\in C_{b}(\mathbb{R}^{2}).
$$ This is proved by Peng (2010a) under the conditions \(\hat {\mathbb {E}}\left [\left |X_{1}\right |^{2+\delta }\right ]<\infty \) and \(\hat {\mathbb {E}}\left [|Y_{1}|^{1+\delta }\right ]<\infty \) (cf. Theorem 3.6 and Remark 3.8 therein). When Y 1≡0, (15) becomes $${\lim}_{n\rightarrow \infty}\hat{\mathbb{E}}\left[\varphi\left(\frac{\widetilde{S}_{n}^{X}(\cdot)}{\sqrt{n}}\right)\right]=\widetilde{\mathbb{E}} \left[\varphi\left(B_{\cdot}\right)\right],\;\; \varphi\in C_{b}(C[0,1]), $$ which is proved by Zhang (2015). Before the proof, we need several lemmas. For random vectors X n in \((\Omega, \mathscr {H}, \hat {\mathbb {E}})\) and X in \((\widetilde {\Omega }, \widetilde {\mathscr {H}}, \widetilde {\mathbb {E}})\), we write \(\boldsymbol X_{n}\overset {d}\rightarrow \boldsymbol {X}\) if $$ \hat{\mathbb{E}}\left[\varphi(\boldsymbol{X}_{n})\right]\rightarrow \widetilde{\mathbb{E}}\left[\varphi(\boldsymbol{X})\right] $$ for any bounded continuous φ. Write \(\boldsymbol X_{n} \overset {\mathbb {V}}\rightarrow \boldsymbol {x}\) if \(\mathbb {V}(\|\boldsymbol {X}_{n}-\boldsymbol {x}\|\ge \epsilon)\rightarrow 0\) for any ε>0. {X n } is called uniformly integrable if $${\lim}_{b\rightarrow \infty}\limsup_{n\rightarrow \infty} \hat{\mathbb{E}}\left[(\|\boldsymbol{X}_{n}\|-b)^{+}\right]= 0. $$ The following three lemmas are obvious. If \(\boldsymbol {X}_{n}\overset {d}\rightarrow \boldsymbol {X}\) and φ is a continuous function, then \(\varphi (\boldsymbol {X}_{n})\overset {d}\rightarrow \varphi (\boldsymbol {X})\). (Slutsky's Lemma) Suppose \(\boldsymbol {X}_{n}\overset {d}\rightarrow \boldsymbol {X}\), \(\boldsymbol {Y}_{n} \overset {\mathbb {V}}\rightarrow \boldsymbol {y}\), \(\eta _{n}\overset {\mathbb {V}}\rightarrow a\), where a is a constant and y is a constant vector, and \(\widetilde {\mathbb {V}}(\|\boldsymbol {X}\|>\lambda)\rightarrow 0\) as λ→∞.
Then, \((\boldsymbol {X}_{n}, \boldsymbol {Y}_{n}, \eta _{n})\overset {d}\rightarrow (\boldsymbol {X},\boldsymbol {y}, a)\), and as a result, \(\eta _{n}\boldsymbol {X}_{n}+\boldsymbol {Y}_{n}\overset {d}\rightarrow a\boldsymbol {X}+\boldsymbol {y}\). Suppose \(\boldsymbol {X}_{n}\overset {d}\rightarrow \boldsymbol {X}\). Then, \(\widetilde {\mathbb {V}}(\|\boldsymbol {X}\|>\lambda)\rightarrow 0\) as λ→∞ is equivalent to the tightness of {X n ;n≥1}, i.e., $$ {\lim}_{\lambda\rightarrow \infty} \limsup_{n\rightarrow \infty} \mathbb{V}\left(\|\boldsymbol{X}_{n}\|>\lambda\right)=0, $$ because for all ε>0, we can define a continuous function φ(x) such that I{x>λ+ε}≤φ(x)≤I{x>λ} and so $$\begin{aligned} &\widetilde{\mathbb{V}}(\|\boldsymbol{X}\|>\lambda+\epsilon)\le \widetilde{\mathbb{E}}[\varphi(\|\boldsymbol{X}\|)]={\lim}_{n\rightarrow \infty} \hat{\mathbb{E}}[\varphi(\|\boldsymbol{X}_{n}\|)] \le \limsup_{n\rightarrow \infty} \mathbb{V}\left(\|\boldsymbol{X}_{n}\|> \lambda\right), \\ &\limsup_{n\rightarrow \infty} \mathbb{V}\left(\|\boldsymbol{X}_{n}\|> \lambda+\epsilon\right)\le {\lim}_{n\rightarrow \infty} \hat{\mathbb{E}}[\varphi(\|\boldsymbol{X}_{n}\|)]=\widetilde{\mathbb{E}}[\varphi(\|\boldsymbol{X}\|)] \le \widetilde{\mathbb{V}}(\|\boldsymbol{X}\|> \lambda). \end{aligned} $$ Suppose \(\boldsymbol {X}_{n}\overset {d}\rightarrow \boldsymbol {X}\). If {X n } is uniformly integrable and \(\widetilde {\mathbb {E}}[(\|\boldsymbol {X}\|-b)^{+}]\rightarrow 0\) as b→∞, then $$ \hat{\mathbb{E}}[\boldsymbol{X}_{n}]\rightarrow \widetilde{\mathbb{E}}[\boldsymbol{X}]. $$ If \(\sup _{n}\hat {\mathbb {E}}[\|\boldsymbol {X}_{n}\|^{q}]<\infty \) and \(\widetilde {\mathbb {E}}[\|\boldsymbol {X}\|^{q}]<\infty \) for some q>1, then (17) holds. The following lemma is proved by Zhang (2015).
Suppose that \(\boldsymbol {X}_{n}\overset {d}\rightarrow \boldsymbol {X}\), \(\boldsymbol {Y}_{n}\overset {d}\rightarrow \boldsymbol {Y}\), Y n is independent to X n under \(\hat {\mathbb {E}}\) and \(\widetilde {\mathbb {V}}(\|\boldsymbol {X}\|>\lambda)\rightarrow 0\) and \(\widetilde {\mathbb {V}}(\|\boldsymbol {Y}\|>\lambda)\rightarrow 0\) as λ→∞. Then \( (\boldsymbol {X}_{n},\boldsymbol {Y}_{n})\overset {d}\rightarrow (\overline {\boldsymbol {X}},\overline {\boldsymbol {Y}}), \) where \(\overline {\boldsymbol {X}}\overset {d}=\boldsymbol {X}\), \(\overline {\boldsymbol {Y}}\overset {d}=\boldsymbol {Y}\) and \(\overline {\boldsymbol {Y}}\) is independent to \(\overline {\boldsymbol {X}}\) under \(\widetilde {\mathbb {E}}\). The next lemma is about the Rosenthal-type inequalities due to Zhang (2016). Let {X 1,…,X n } be a sequence of independent random variables in \((\Omega, \mathscr {H}, \hat {\mathbb {E}})\). Suppose p≥2. Then, $$ \begin{aligned} \hat{\mathbb{E}}\left[\max_{k\le n} \left|S_{k}\right|^{p}\right]&\le C_{p}\left\{ \sum_{k=1}^{n} \hat{\mathbb{E}} \left[|X_{k}|^{p}\right]+\left(\sum_{k=1}^{n} \hat{\mathbb{E}} \left[|X_{k}|^{2}\right]\right)^{p/2} \right. \\ & \qquad \left. +\left(\sum_{k=1}^{n} \left[\left(\widehat{\mathcal{E}} [X_{k}]\right)^{-}+\left(\hat{\mathbb{E}} [X_{k}]\right)^{+}\right]\right)^{p}\right\}. \end{aligned} $$ Suppose \(\hat {\mathbb {E}}[X_{k}]\le 0\), k=1,…,n. Then, $$ \hat{\mathbb{E}}\left[\left|\max_{k\le n} (S_{n}-S_{k})\right|^{p}\right] \le 2^{2-p}\sum_{k=1}^{n} \hat{\mathbb{E}} [|X_{k}|^{p}], \;\; \text{for}~1\le p\le 2 $$ $$ \begin{aligned} \hat{\mathbb{E}}\left[\left|\max_{k\le n}(S_{n}- S_{k})\right|^{p}\right] &\le C_{p}\left\{ \sum_{k=1}^{n} \hat{\mathbb{E}} \left[|X_{k}|^{p}\right]+\left(\sum_{k=1}^{n} \hat{\mathbb{E}} \left[|X_{k}|^{2}\right]\right)^{p/2}\right\} \\ &\le C_{p} n^{p/2-1} \sum_{k=1}^{n} \hat{\mathbb{E}} [|X_{k}|^{p}], \;\; \text{for}~p\ge 2.
\end{aligned} $$ Suppose \(\hat {\mathbb {E}}[X_{1}]=\hat {\mathbb {E}}[-X_{1}]=0\) and \(\hat {\mathbb {E}}\left [X_{1}^{2}\right ]<\infty \). Let \(\overline {X}_{n,k}=(-\sqrt {n})\vee X_{k}\wedge \sqrt {n}\), \(\widehat {X}_{n,k}=X_{k}-\overline {X}_{n,k}\), \(\overline {S}_{n,k}^{X}=\sum _{j=1}^{k} \overline {X}_{n,j}\) and \(\widehat {S}_{n,k}^{X}=\sum _{j=1}^{k}\widehat {X}_{n,j}\), k=1,…,n. Then $$\begin{aligned} \hat{\mathbb{E}}\left[\max_{k\le n} \left|\frac{\overline{S}_{n,k}^{X}}{\sqrt{n}}\right|^{q}\right]\le C_{q}, \;\; ~\text{for all} ~q\ge 2, \end{aligned} $$ $$ {\lim}_{n\rightarrow \infty} \hat{\mathbb{E}}\left[\max_{k\le n} \left|\frac{\widehat{S}_{n,k}^{X}}{\sqrt{n}}\right|^{p}\right]=0 $$ whenever \(\hat {\mathbb {E}}[(|X_{1}|^{p}-b)^{+}]\rightarrow 0\) as b→∞ if p=2, and \(\hat {\mathbb {E}}[|X_{1}|^{p}]<\infty \) if p>2. Note \(\hat {\mathbb {E}}[X_{1}]=\widehat {\mathcal {E}}[X_{1}]=0\). So, \(|\widehat {\mathcal {E}}[\overline {X}_{n,1}]|=|\widehat {\mathcal {E}}[X_{1}]-\widehat {\mathcal {E}}[\overline {X}_{n,1}]|\le \hat {\mathbb {E}}|\widehat {X}_{n,1}|\le \hat {\mathbb {E}}[(|X_{1}|^{2}-n)^{+}]n^{-1/2}\) and \(|\hat {\mathbb {E}}[\overline {X}_{n,1}]|=|\hat {\mathbb {E}}[X_{1}]-\hat {\mathbb {E}}[\overline {X}_{n,1}]|\le \hat {\mathbb {E}}|\widehat {X}_{n,1}|\le \hat {\mathbb {E}}[(|X_{1}|^{2}-n)^{+}]n^{-1/2}\). By Rosenthal's inequality (cf. 
(18)), $$\begin{aligned} & \hat{\mathbb{E}}\left[\max_{k\le n} \left|\overline{S}_{n,k}^{X}\right|^{q}\right] \le C_{q}\left\{ n \hat{\mathbb{E}} \left[|\overline{X}_{n,1}|^{q}\right]+\left(n \hat{\mathbb{E}} \left[|\overline{X}_{n,1}|^{2}\right]\right)^{q/2}\right.\\ & \qquad\qquad \qquad\qquad \left.+\left(n\left[\left(\widehat{\mathcal{E}} [\overline{X}_{n,1}]\right)^{-}+\left(\hat{\mathbb{E}} [\overline{X}_{n,1}]\right)^{+}\right]\right)^{q}\right\}\\ & \;\; \le C_{q}\left\{ n n^{q/2-1}\hat{\mathbb{E}} \left[|X_{1}|^{2}\right]+n^{q/2}\left(\hat{\mathbb{E}} \left[X_{1}^{2}\right]\right)^{q/2}+\left(nn^{-1/2}\hat{\mathbb{E}}\left[\left(X_{1}^{2}-n\right)^{+}\right]\right)^{q}\right\} \\ & \;\; \le C_{q} n^{q/2}\left\{\hat{\mathbb{E}} \left[\left|X_{1}\right|^{2}\right]+\left(\hat{\mathbb{E}} \left[X_{1}^{2}\right]\right)^{q}\right\}, \;\; \text{for all}~q\ge 2 \end{aligned} $$ $$\begin{aligned} \hat{\mathbb{E}}\left[\max_{k\le n} \left|\widehat{S}_{n,k}^{X}\right|^{p}\right] \le & C_{p}\left\{ n \hat{\mathbb{E}} \left[|\widehat{X}_{n,1}|^{p}\right]+\left(n \hat{\mathbb{E}} \left[|\widehat{X}_{n,1}|^{2}\right]\right)^{p/2}\right. \\ & \left.\qquad +\left(n\left[\left(\widehat{\mathcal{E}} [\widehat{X}_{n,1}]\right)^{-}+\left(\hat{\mathbb{E}} [\widehat{X}_{n,1}]\right)^{+}\right]\right)^{p}\right\}\\ \le & C_{p}\left\{ n\hat{\mathbb{E}} \left[\big(|X_{1}|^{p}-n^{p/2}\big)^{+}\right]+n^{p/2}\left(\hat{\mathbb{E}} \left[(X_{1}^{2}-n)^{+}\right]\right)^{p/2}\right.\\ & \left.\qquad +n^{p/2}\left(\hat{\mathbb{E}} \left[(X_{1}^{2}-n)^{+}\right]\right)^{p}\right\},\; p\ge 2. \end{aligned} $$ The proof is completed. □ (a) Suppose p≥2, \(\hat {\mathbb {E}}[X_{1}]=\hat {\mathbb {E}}[-X_{1}]=0\), \(\hat {\mathbb {E}}[(X_{1}^{2}-b)^{+}]\rightarrow 0\) as b→∞ and \(\hat {\mathbb {E}}[|X_{1}|^{p}]<\infty \). Then, $$ \left\{\max_{k\le n}\left|\frac{S_{k}^{X}}{\sqrt{n}}\right|^{p}\right\}_{n=1}^{\infty} \; \text{is uniformly integrable and therefore is tight}.
$$ (b) Suppose p≥1, \(\hat {\mathbb {E}}\left [(|Y_{1}|-b)^{+}\right ]\rightarrow 0\) as b→∞, and \(\hat {\mathbb {E}}[|Y_{1}|^{p}]<\infty \). Then, $$ \left\{\max_{k\le n}\left|\frac{S_{k}^{Y}}{n}\right|^{p}\right\}_{n=1}^{\infty} \; \text{is uniformly integrable and therefore is tight}. $$ (a) follows from Lemma 6. (b) is obvious by noting $$\begin{aligned} & \hat{\mathbb{E}}\left[\left(\left(\frac{\max_{k\le n}|S_{k}^{Y}|}{n}-b\right)^{+}\right)^{p}\right] \le \hat{\mathbb{E}}\left[\left(\frac{\sum_{k=1}^{n} (|Y_{k}|-b)^{+}}{n}\right)^{p}\right]\\ \le& C_{p} \left(\frac{\sum_{k=1}^{n} \hat{\mathbb{E}}[(|Y_{k}|-b)^{+}]}{n}\right)^{p} \\ & \qquad +C_{p} \frac{\hat{\mathbb{E}}\Big[\Big|\left(\sum_{k=1}^{n}\{ (|Y_{k}|-b)^{+}-\hat{\mathbb{E}}[(|Y_{k}|-b)^{+}]\}\right)^{+}\Big|^{p}\Big]}{n^{p}}\\ \le & C_{p}\left(\hat{\mathbb{E}}\left[(|Y_{1}|-b)^{+}\right]\right)^{p}+C_{p}\big(n^{-p/2}+n^{1-p}\big)\hat{\mathbb{E}}\big[(|Y_{1}|^{p}-b^{p})^{+}\big] \end{aligned} $$ by the Rosenthal-type inequalities (19) and (20). □ Suppose \(\hat {\mathbb {E}}\left [(|Y_{1}|-b)^{+}\right ]\rightarrow 0\) as b→∞. Then, for any ε>0, $$\mathbb{V}\left(\frac{S_{n}^{Y}}{n}>\hat{\mathbb{E}}[Y_{1}]+\epsilon\right)\rightarrow 0~\text{and}~\mathbb{V}\left(\frac{S_{n}^{Y}}{n}<\widehat{\mathcal{E}}[Y_{1}]-\epsilon\right)\rightarrow 0. $$ Let Y k,b =(−b)∨Y k ∧b, \(S_{n,1}=\sum _{k=1}^{n} Y_{k,b}\) and \(S_{n,2}=S_{n}^{Y}-S_{n,1}\). Note \(\hat {\mathbb {E}}\big [Y_{1,b}]\rightarrow \hat {\mathbb {E}}[Y_{1}]\) as b→∞. Suppose \(\left |\hat {\mathbb {E}}[Y_{1,b}] - \hat {\mathbb {E}}[Y_{1}]\right |<\epsilon /4\). Then, by Kolmogorov's inequality (cf. 
(19)), $$\begin{aligned} &\mathbb{V}\left(\frac{S_{n,1}}{n}>\hat{\mathbb{E}}[Y_{1}]+\epsilon/2\right)\le \mathbb{V}\left(\frac{S_{n,1}}{n}>\hat{\mathbb{E}}[Y_{1,b}]+\epsilon/4\right)\\ \le & \frac{16}{n^{2}\epsilon^{2}} \hat{\mathbb{E}}\left[\left(\Big(\sum_{k=1}^{n} \big(Y_{k,b}-\hat{\mathbb{E}}[Y_{k,b}]\big)\Big)^{+}\right)^{2}\right] \\ \le & \frac{32}{n^{2}\epsilon^{2}} \sum_{k=1}^{n} \hat{\mathbb{E}}\left[\big(Y_{k,b}-\hat{\mathbb{E}}[Y_{k,b}]\big)^{2}\right]\le \frac{32(2b)^{2}}{n\epsilon^{2}}\rightarrow 0. \end{aligned} $$ $$\begin{aligned} \mathbb{V}\left(\frac{S_{n,2}}{n}> \epsilon/2\right)\le & \frac{2}{n\epsilon} \sum_{k=1}^{n} \hat{\mathbb{E}}|Y_{k}-Y_{k,b}| \le \frac{2}{\epsilon}\hat{\mathbb{E}}\left[(|Y_{1}|-b)^{+}\right] \rightarrow 0~\text{as}~b \rightarrow \infty. \end{aligned} $$ It follows that $$\mathbb{V}\left(\frac{S_{n}^{Y}}{n}>\hat{\mathbb{E}}[Y_{1}]+\epsilon\right)\rightarrow 0. $$ By considering {−Y k } instead, we have $$\mathbb{V}\left(\frac{S_{n}^{Y}}{n}<\widehat{\mathcal{E}}[Y_{1}]-\epsilon\right)=\mathbb{V}\left(\frac{-S_{n}^{Y}}{n}>\hat{\mathbb{E}}[-Y_{1}]+\epsilon\right)\rightarrow 0.$$ Proof of Theorem 4. We first show the tightness of \(\widetilde {\boldsymbol W}_{n}\). It is easily seen that $$w_{\delta}\left(\frac{\widetilde{S}_{n}^{Y}(\cdot)}{n}\right) \le 2\delta b+\frac{\sum_{k=1}^{n} (|Y_{k}|-b)^{+}}{n}. $$ It follows that for any ε>0, if δ<ε/(4b), then $$\sup_{n}\mathbb{V}\left(w_{\delta}\left(\frac{\widetilde{S}_{n}^{Y}(\cdot)}{n}\right)\ge \epsilon\right) \le \sup_{n} \mathbb{V}\left(\sum_{k=1}^{n} (|Y_{k}|-b)^{+}\ge n\frac{\epsilon}{2}\right)\le \frac{2}{\epsilon}\hat{\mathbb{E}}\left[(|Y_{1}|-b)^{+}\right]. $$ Letting δ→0 and then b→∞ yields $$\sup_{n}\mathbb{V}\left(w_{\delta}\left(\frac{\widetilde{S}_{n}^{Y}(\cdot)}{n}\right)\ge \epsilon\right)\rightarrow 0~\text{as}~\delta \rightarrow 0. 
$$ For any η>0, we choose δ k ↓0 such that, if $$A_{k}=\left\{x: \omega_{\delta_{k}}(x)<\frac{1}{k}\right\}, $$ then \(\sup _{n}\mathbb {V}\left (\widetilde {S}_{n}^{Y}(\cdot)/n \in A_{k}^{c}\right)\le \eta /2^{k+1}\). Let A={x:|x(0)|≤a}, \(K_{2}=A\bigcap _{k=1}^{\infty }A_{k}\). Then, by the Arzelà-Ascoli theorem, K 2⊂C[0,1] is compact. It is obvious that \(\{ \widetilde {S}_{n}^{Y}(\cdot)/n\not \in A\}=\emptyset \), because \( \widetilde {S}_{n}^{Y}(0)/n=0\). Next, we show that $$\mathbb{V}\left(\widetilde{S}_{n}^{Y}(\cdot)/n\in K_{2}^{c}\right)\le \mathbb{V}\left(\widetilde{S}_{n}^{Y}(\cdot)/n\in A^{c}\right)+\sum_{k=1}^{\infty}\mathbb{V}\left(\widetilde{S}_{n}^{Y}(\cdot)/n\in A_{k}^{c}\right). $$ Note that when δ<1/(2n), $$ \omega_{\delta}\left(\widetilde{S}_{n}^{Y}(\cdot)/n\right)\le 2n |t-s|\max_{i\le n} |Y_{i}|/n \le 2 \delta \max_{i\le n} |Y_{i}|. $$ Choose a k 0 such that δ k <1/(2Mk) for k≥k 0. Then, on the event E={max i≤n |Y i |≤M}, \(\{ \widetilde {S}_{n}^{Y}(\cdot)/n\in A_{k}^{c}\}=\emptyset \) for k≥k 0. So, by the (finite) sub-additivity of \(\mathbb {V}\), $$\begin{aligned} &\mathbb{V}\left(E \bigcap \left\{ \widetilde{S}_{n}^{Y}(\cdot)/n\in K_{2}^{c}\right\}\right)\\ \le & \mathbb{V}\left(E \bigcap\left\{ \widetilde{S}_{n}^{Y}(\cdot)/n \in A^{c}\right\}\right)+\sum_{k=1}^{k_{0}}\mathbb{V}\left(E\bigcap \left\{ \widetilde{S}_{n}^{Y}(\cdot)/n \in A_{k}^{c}\right\}\right) \\ \le & \mathbb{V}\left(\widetilde{S}_{n}^{Y}(\cdot)/n \in A^{c}\right)+\sum_{k=1}^{\infty}\mathbb{V} \left(\widetilde{S}_{n}^{Y}(\cdot)/n \in A_{k}^{c}\right). \end{aligned} $$ $$\mathbb{V}(E^{c})\le \frac{\hat{\mathbb{E}}[\max_{i\le n} |Y_{i}|]}{M}\le \frac{n\hat{\mathbb{E}}[|Y_{1}|]}{M}. $$ $$\begin{aligned} \mathbb{V}\left(\widetilde{S}_{n}^{Y}(\cdot)/n\in K_{2}^{c} \right) \le \mathbb{V}\left(\widetilde{S}_{n}^{Y}(\cdot)/n \in A^{c}\right)+\sum_{k=1}^{\infty}\mathbb{V}\left(\widetilde{S}_{n}^{Y}(\cdot)/n \in A_{k}^{c}\right)+ \frac{n \hat{\mathbb{E}}[|Y_{1}|]}{M}.
\end{aligned} $$ Letting M→∞ yields $$\begin{aligned} \mathbb{V}\left(\widetilde{S}_{n}^{Y}(\cdot)/n\in K_{2}^{c} \right) & \le \mathbb{V}(\widetilde{S}_{n}^{Y}(\cdot)/n \in A^{c})+\sum_{k=1}^{\infty}\mathbb{V}\left(\widetilde{S}_{n}^{Y}(\cdot)/n \in A_{k}^{c}\right)\\ &< 0+\sum_{k=1}^{\infty} \frac{\eta}{2^{k+1}}<\frac{\eta}{2}. \end{aligned} $$ We conclude that for any η>0, there exists a compact K 2⊂C[0,1] such that $$ \sup_{n} \hat{\mathbb{E}}^{\ast}\left[I\left\{\frac{\widetilde{S}_{n}^{Y}(\cdot)}{n}\not\in K_{2}\right\}\right]=\sup_{n} \mathbb{V}\left \{\frac{\widetilde{S}_{n}^{Y}(\cdot)}{n}\not\in K_{2}\right\}<\eta/2. $$ Next, we show that for any η>0, there exists a compact K 1⊂C[0,1] such that $$ \sup_{n} \hat{\mathbb{E}}^{\ast}\left[I\left\{\frac{\widetilde{S}_{n}^{X}(\cdot)}{\sqrt{n}}\not\in K_{1}\right\}\right]=\sup_{n} \mathbb{V}\left \{\frac{\widetilde{S}_{n}^{X}(\cdot)}{\sqrt{n}}\not\in K_{1}\right\}<\eta/2. $$ Similarly to (21), it is sufficient to show that $$ \sup_{n}\mathbb{V}\left(w_{\delta}\left(\frac{\widetilde{S}_{n}^{X}(\cdot)}{\sqrt{n}}\right)\ge \epsilon\right)\rightarrow 0 ~\text{as}~\delta \rightarrow 0. $$ By the same argument as in Billingsley (1968, pages 56–59, cf. (8.12)), for large n, $$ \begin{aligned} &\mathbb{V}\left(w_{\delta}\left(\frac{\widetilde{S}_{n}^{X}(\cdot)}{\sqrt{n}}\right)\ge 3\epsilon\right) \le \frac{2}{\delta} \mathbb{V}\left(\max_{i\le [n\delta]} \frac{|S_{i}^{X}|}{\sqrt{[n\delta]}}\ge \epsilon \frac{\sqrt{n}}{\sqrt{[n\delta]}} \right) \\ \le &\frac{2}{\delta} \mathbb{V}\left(\max_{i\le [n\delta]} \frac{\left|S_{i}^{X}\right|}{\sqrt{[n\delta]}}\ge \frac{\epsilon }{\sqrt{2 \delta }} \right) \le \frac{4}{\epsilon^{2}}\hat{\mathbb{E}}\left[\left(\max_{i\le [n\delta]} \Big|\frac{S_{i}^{X}}{\sqrt{[n\delta]}}\Big|^{2}-\frac{\epsilon^{2} }{ 2 \delta }\right)^{+}\right].
\end{aligned} $$ $$ {\lim}_{\delta\rightarrow 0} \limsup_{n\rightarrow \infty} \mathbb{V}\left(w_{\delta}\left(\frac{\widetilde{S}_{n}^{X}(\cdot)}{\sqrt{n}}\right)\ge 3\epsilon\right)=0 $$ by Lemma 8 (a), where p=2. On the other hand, for fixed n, if δ<1/(2n), then $$ \omega_{\delta}(\widetilde{S}_{n}^{X}(\cdot)/\sqrt{n})\le 2n |t-s|\max_{i\le n} |X_{i}|/\sqrt{n} \le 2 \delta \sqrt{n} \max_{i\le n} |X_{i}|. $$ $$ {\lim}_{\delta\rightarrow 0} \mathbb{V}\left(w_{\delta}\left(\frac{\widetilde{S}_{n}^{X}(\cdot)}{\sqrt{n}}\right)\ge \epsilon\right)=0 $$ for each n. It follows that (23) holds. Now, by combining (21) and (22) we obtain the tightness of \(\widetilde {\boldsymbol W}_{n}\) as follows. $$ \sup_{n} \hat{\mathbb{E}}^{\ast}\Big[I\Big\{\widetilde{\boldsymbol W}_{n}(\cdot)\not\in K_{1}\times K_{2}\Big\}\Big]<\eta. $$ Define \(\hat {\mathbb {E}}_{n}\) by $$ \hat{\mathbb{E}}_{n}[\varphi]=\hat{\mathbb{E}}\Big[\varphi\big(\widetilde{\boldsymbol W}_{n}(\cdot)\big)\Big],\;\; \varphi\in C_{b}\big(C[0,1]\times C[0,1]\big). $$ Then, the sequence of sub-linear expectations \(\{\hat {\mathbb {E}}_{n}\}_{n=1}^{\infty }\) is tight by (24). By Theorem 9 of Peng (2010b), \(\{\hat {\mathbb {E}}_{n}\}_{n=1}^{\infty }\) is weakly compact, namely, for each subsequence \(\{\hat {\mathbb {E}}_{n_{k}}\}_{k=1}^{\infty }\), n k →∞, there exists a further subsequence \(\left \{\hat {\mathbb {E}}_{m_{j}}\right \}_{j=1}^{\infty } \subset \left \{\hat {\mathbb {E}}_{n_{k}}\right \}_{k=1}^{\infty }\), m j →∞, such that, for each φ∈C b (C[0,1]×C[0,1]), \(\{\hat {\mathbb {E}}_{m_{j}}[\varphi ]\}\) is a Cauchy sequence. Define \({\mathbb F}[\cdot ]\) by $$ {\mathbb F}[\varphi]={\lim}_{j\rightarrow \infty}\hat{\mathbb{E}}_{m_{j}}[\varphi], \; \varphi\in C_{b}\big(C[0,1]\times C[0,1]\big). 
$$ Let \(\overline {\Omega }=C[0,1]\times C[0,1]\), and (ξ t ,η t ) be the canonical process \(\xi _{t}(\omega) = \omega _{t}^{(1)}\), \(\eta _{t}(\omega)=\omega _{t}^{(2)}\left (\omega =\left (\omega ^{(1)},\omega ^{(2)}\right)\in \overline {\Omega }\right)\). Then, $$\hat{\mathbb{E}}\Big[\varphi\big(\widetilde{\boldsymbol W}_{m_{j}}(\cdot)\big)\Big]\rightarrow {\mathbb F}\left[\varphi(\xi_{\cdot},\eta_{\cdot})\right],\;\;\varphi\in C_{b}\big(C[0,1]\times C[0,1]\big). $$ The topological completion of \(C_{b}(\overline {\Omega })\) under the Banach norm \({\mathbb F}[\|\cdot \|]\) is denoted by \(L_{\mathbb F} (\overline {\Omega })\). \({\mathbb F}[\cdot ]\) can be extended uniquely to a sub-linear expectation on \(L_{\mathbb F} (\overline {\Omega })\). Next, it is sufficient to show that (ξ t ,η t ) defined on the sub-linear space \((\overline {\Omega }, L_{\mathbb F} (\overline {\Omega }), {\mathbb F})\) satisfies (i)-(v) and so \((\xi _{\cdot },\eta _{\cdot })\overset {d}=(B_{\cdot },b_{\cdot })\), which means that the limit distribution of any subsequence of \(\widetilde {\boldsymbol W}_{n}(\cdot)\) is uniquely determined. The conclusion in (i) is obvious. For (ii) and (iii), we let 0≤t 1≤…≤t k ≤s≤t+s. By (25), for any bounded continuous function \(\varphi :\mathbb R^{2(k+1)}\rightarrow \mathbb R\) we have $$\begin{aligned} & \hat{\mathbb{E}}\left[\varphi\big(\widetilde{W}_{m_{j}}(t_{1}),\ldots, \widetilde{W}_{m_{j}}(t_{k}), \widetilde{W}_{m_{j}}(s+t)-\widetilde{W}_{m_{j}}(s)\big)\right] \\ \rightarrow &{\mathbb F} \left[\varphi\big((\xi_{t_{1}},\eta_{t_{1}}), \ldots, (\xi_{t_{k}},\eta_{t_{k}}),(\xi_{s+t}-\xi_{s},\eta_{s+t}-\eta_{s})\big)\right]. 
\end{aligned} $$ $$\begin{aligned} & \sup_{0\le t\le 1}\frac{\left|\widetilde{S}_{n}^{X}(t)-S_{[nt]}^{X}\right|}{\sqrt{n}}\le \frac{\max_{k\le n}|X_{k}|}{\sqrt{n}}\overset{\mathbb{V}}\rightarrow 0,\\ &\sup_{0\le t\le 1}\frac{\left|\widetilde{S}_{n}^{Y}(t)-S_{[nt]}^{Y}\right|}{n}\le \frac{\max_{k\le n}|Y_{k}|}{n}\overset{\mathbb{V}}\rightarrow 0. \end{aligned} $$ It follows that by Lemmas 3 and 8, $$ \begin{aligned} & \hat{\mathbb{E}}\left[\varphi\left(\left(\frac{S_{[m_{j}t_{1}]}^{X}}{\sqrt{m_{j}}},\frac{S_{[m_{j}t_{1}]}^{Y}}{m_{j}}\right),\ldots, \left(\frac{S_{[m_{j}t_{k}]}^{X}}{\sqrt{m_{j}}},\frac{S_{[m_{j}t_{k}]}^{Y}}{m_{j}}\right),\right.\right.\\ &\ \ \ \ \ \ \ \ \ \ \ \left.\left. \left(\frac{S_{[m_{j}(s+t)]}^{X}-S_{[m_{j}s]}^{X}}{\sqrt{m_{j}}},\frac{S_{[m_{j}(s+t)]}^{Y}-S_{[m_{j}s]}^{Y}}{m_{j}}\right)\right)\right] \\ & \qquad \rightarrow {\mathbb F} \left[\varphi\big((\xi_{t_{1}},\eta_{t_{1}}), \ldots, (\xi_{t_{k}},\eta_{t_{k}}),(\xi_{s+t}-\xi_{s},\eta_{s+t}-\eta_{s})\big)\right]. \end{aligned} $$ In particular, $$\begin{aligned} \left(\frac{S_{[m_{j}(s+t)]-[m_{j}s]}^{X}}{\sqrt{m_{j}}},\frac{S_{[m_{j}(s+t)]-[m_{j}s]}^{Y}}{m_{j}}\right) \overset{d}=& \left(\frac{S_{[m_{j}(s+t)]}^{X}-S_{[m_{j}s]}^{X}}{\sqrt{m_{j}}}, \frac{S_{[m_{j}(s+t)]}^{Y}-S_{[m_{j}s]}^{Y}}{m_{j}}\right)\\ &\overset{d}\rightarrow \big(\xi_{s+t}-\xi_{s}, \eta_{s+t}-\eta_{s}\big). \end{aligned} $$ $$ \left(\frac{S_{[m_{j}t]}^{X}}{\sqrt{m_{j}}},\frac{S_{[m_{j}t]}^{Y}}{m_{j}}\right) \overset{d}\rightarrow \big(\xi_{s+t}-\xi_{s}, \eta_{s+t}-\eta_{s}\big). $$ $$\left(\frac{S_{[m_{j}t]}^{X}}{\sqrt{m_{j}}},\frac{S_{[m_{j}t]}^{Y}}{m_{j}}\right) \overset{d}\rightarrow \big(\xi_{t}, \eta_{t}\big), $$ by (26). Hence, $$ {\mathbb F}\left[\phi(\xi_{s+t}-\xi_{s},\eta_{s+t}-\eta_{s})\right]={\mathbb F}\left[\phi(\xi_{t},\eta_{t})\right]\;\; \text{for all}~\phi\in C_{b}\left(\mathbb R^{2}\right). 
$$ Next, we show that $$ {\mathbb F}[|\xi_{s+t}-\xi_{s}|^{p}]\le C_{p} t^{p/2}\; \text{and }\; {\mathbb F}[|\eta_{s+t}-\eta_{s}|^{p}]\le C_{p} t^{p},\;\;\text{for all}~p\ge 2~\text{and}~t,s \ge 0. $$ By Lemma 9, $$ \widetilde{\mathcal{V}}\big(t\underline{\mu}-\epsilon\le \eta_{s+t}-\eta_{s}\le t\overline{\mu}+\epsilon\big)=1\;\; \text{for all} \; \epsilon>0. $$ $${\mathbb F}[|\eta_{s+t}-\eta_{s}|^{p}]\le t^{p}\big|\hat{\mathbb{E}}[|Y_{1}|]\big|^{p}. $$ To deal with ξ s+t −ξ s , we let \(\overline {S}_{n,k}^{X}\) and \(\widehat {S}_{n,k}^{X}\) be defined as in Lemma 7. Then, \(S_{k}^{X}=\overline {S}_{n,k}^{X}+ \widehat {S}_{n,k}^{X}\). By (27) and Lemmas 7 and 3, $$\frac{\overline{S}_{[m_{j}t],[m_{j}t]}^{X}}{\sqrt{m_{j}}} \overset{d}\rightarrow \xi_{s+t}-\xi_{s}\; \text{and }\; \hat{\mathbb{E}}\left[\left|\frac{\overline{S}_{[m_{j}t],[m_{j}t]}^{X}}{\sqrt{m_{j}}}\right|^{p}\right]\le C_{p} t^{p/2},\; p\ge 2. $$ $$ {\mathbb F}\left[| \xi_{s+t}-\xi_{s}|^{p}\wedge b\right]={\lim}_{j\rightarrow\infty}\hat{\mathbb{E}}\left[\left|\frac{\overline{S}_{[m_{j}t],[m_{j}t]}^{X}}{\sqrt{m_{j}}}\right|^{p}\wedge b\right]\le C_{p} t^{p/2}, \;\text{ for any}~b>0. $$ $$ {\mathbb F}\left[| \xi_{s+t}-\xi_{s}|^{p} \right]={\lim}_{b\rightarrow \infty} {\mathbb F}\left[| \xi_{s+t}-\xi_{s}|^{p}\wedge b\right]\le C_{p} t^{p/2} $$ by the completeness of \((\overline {\Omega }, L_{\mathbb F} (\overline {\Omega }), {\mathbb F})\). (29) is proved. Now, note that (X i ,Y i ),i=1,2,…, are independent and identically distributed. By (26) and Lemma 5, it is easily seen that (ξ ·,η ·) satisfies (14) for \(\varphi \in C_{b}(\mathbb R^{2(k+1)})\). Note that, by (29), the random variables concerned in (14) and (28) have finite moments of each order. The function spaces \(C_{b}(\mathbb R^{2(k+1)})\) and \(C_{b}(\mathbb R^{2})\) can be extended to \(C_{l,Lip}(\mathbb R^{2(k+1)})\) and \(C_{l,Lip}(\mathbb R^{2})\), respectively, by elementary arguments. So, (ii) and (iii) are proved.
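In the special classical case \(\underline{\sigma}^{2}=\overline{\sigma}^{2}=1\), the increment ξ s+t −ξ s is an ordinary N(0,t) random variable, and the moment bound \({\mathbb F}[|\xi_{s+t}-\xi_{s}|^{p}]\le C_{p}t^{p/2}\) holds with equality for the explicit constant \(C_{p}=E|N(0,1)|^{p}=2^{p/2}\Gamma((p+1)/2)/\sqrt{\pi}\). The following Monte Carlo sanity check of this t^{p/2} scaling is an illustration of the classical case only; the function names and parameters are ours:

```python
import math
import random

# Classical-case illustration: for standard Brownian motion the increment
# W(s+t) - W(s) ~ N(0, t), so E|W(s+t) - W(s)|^p = t^{p/2} * E|N(0,1)|^p,
# matching the bound F[|xi_{s+t} - xi_s|^p] <= C_p t^{p/2} with equality.
def abs_moment_std_normal(p):
    """E|Z|^p for Z ~ N(0,1): 2^{p/2} Gamma((p+1)/2) / sqrt(pi)."""
    return 2.0 ** (p / 2.0) * math.gamma((p + 1.0) / 2.0) / math.sqrt(math.pi)

def simulated_increment_moment(t, p, n=200_000, seed=0):
    """Monte Carlo estimate of E|sqrt(t) Z|^p over n independent draws."""
    rng = random.Random(seed)
    return sum(abs(math.sqrt(t) * rng.gauss(0.0, 1.0)) ** p for _ in range(n)) / n

t, p = 0.25, 4
exact = t ** (p / 2) * abs_moment_std_normal(p)   # = 3 * t^2 = 0.1875 for p = 4
approx = simulated_increment_moment(t, p)
print(exact, approx)
```

In the genuinely sub-linear case \(\underline{\sigma}<\overline{\sigma}\) no single simulation law suffices; the corresponding constant would involve a supremum over volatility scenarios.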
For (iv) and (v), we let \(\varphi :\mathbb R^{2}\rightarrow \mathbb R\) be a bounded Lipschitz function and consider $$ u(x,y,t)={\mathbb F}\left[\varphi(x+\xi_{t},y+\eta_{t})\right]. $$ It is sufficient to show that u is a viscosity solution of the PDE (13). In fact, due to the uniqueness of the viscosity solution, we will have $${\mathbb F}\left[\varphi(x+\xi_{t},y+\eta_{t})\right]=\widetilde{\mathbb E}\left[\varphi(x+\sqrt{t} \xi,y+t\eta)\right], \;\; \varphi\in C_{b,Lip}(\mathbb R^{2}). $$ Letting x=0 and y=0 yields (iv) and (v). To verify PDE (13), first it is easily seen that $$ \hat{\mathbb{E}}\left[\frac{q}{2} \left(\frac{S_{[nt]}^{X}}{\sqrt{n}}\right)^{2}+p \frac{S_{[nt]}^{Y}}{n}\right]=\frac{[nt]}{n}\hat{\mathbb{E}}\left[\frac{q}{2} \left(\frac{S_{[nt]}^{X}}{\sqrt{[nt]}}\right)^{2}+p \frac{S_{[nt]}^{Y}}{[nt]}\right]=\frac{[nt]}{n}G(p,q). $$ Note that \(\left \{\frac {q}{2} \left (\frac {S_{[nt]}^{X}}{\sqrt {n}}\right)^{2}+p \frac {S_{[nt]}^{Y}}{n}\right \}\) is uniformly integrable by Lemma 8. By Lemma 4, we conclude that $${\mathbb F}\left[\frac{q}{2}\xi_{t}^{2}+p\eta_{t}\right]={\lim}_{m_{j}\rightarrow \infty}\hat{\mathbb{E}}\left[\frac{q}{2} \left(\frac{S_{[m_{j}t]}^{X}}{\sqrt{m_{j}}}\right)^{2}+p \frac{S_{[m_{j}t]}^{Y}}{m_{j}}\right]=t G(p,q). $$ It is obvious that if q 1≤q 2, then G(p,q 1)−G(p,q 2)≤G(0,q 1−q 2)≤0. Also, it is easy to verify that \( |u(x,y,t)-u(\overline {x},\overline {y},t)|\le C (|x-\overline {x}|+|y-\overline {y}|) \), \( |u(x,y,t)-u(x,y,s)|\le C\sqrt {|t-s|}\) by the Lipschitz continuity of φ, and $$\begin{aligned} u(x,y,t)=&{\mathbb F}\left[\varphi(x+\xi_{s}+\xi_{t}-\xi_{s},y+\eta_{s}+\eta_{t}-\eta_{s})\right]\\ =& {\mathbb F}\left[ {\mathbb F} \left[\varphi(x+\overline{x}+\xi_{t}-\xi_{s},y+\overline{y}+\eta_{t}-\eta_{s})\right]\big|_{(\overline{x},\overline{y})=(\xi_{s},\eta_{s})}\right]\\ = & {\mathbb F}\left[u(x+\xi_{s},y+\eta_{s}, t-s)\right],\; 0\le s\le t. 
\end{aligned} $$ Let \(\psi (\cdot,\cdot,\cdot)\in C_{b}^{3,3,2}(\mathbb R,\mathbb R,[0,1])\) be a smooth function with ψ≥u and ψ(x,y,t)=u(x,y,t). Then, $${{\begin{aligned} 0=& {\mathbb F}\left[u(x+\xi_{s},y+\eta_{s}, t-s)-u(x,y,t)\right]\le {\mathbb F}\left[\psi(x+\xi_{s},y+\eta_{s}, t-s)-\psi(x,y,t)\right]\\ = & {\mathbb F}\left[\partial_{x}\psi(x,y,t)\xi_{s}+\frac{1}{2} \partial_{xx}^{2}\psi(x,y,t)\xi_{s}^{2}+\partial_{y}\psi(x,y,t)\eta_{s}-\partial_{t} \psi(x,y,t) s+I_{s}\right]\\ \le & {\mathbb F}\left[\partial_{x}\psi(x,y,t)\xi_{s}+\frac{1}{2} \partial_{xx}^{2}\psi(x,y,t)\xi_{s}^{2}+\partial_{y}\psi(x,y,t)\eta_{s}-\partial_{t} \psi(x,y,t) s\right]+ {\mathbb F}[|I_{s}|]\\ =& {\mathbb F}\left[\frac{1}{2} \partial_{xx}^{2}\psi(x,y,t)\xi_{s}^{2}+\partial_{y}\psi(x,y,t)\eta_{s}\right]-\partial_{t} \psi(x,y,t) s+ {\mathbb F}[|I_{s}|]\\ =& sG(\partial_{y}\psi(x,y,t),\partial_{xx}^{2}\psi(x,y,t)) -s\partial_{t} \psi(x,y,t)+ {\mathbb F}[|I_{s}|], \end{aligned}}} $$ $$|I_{s}|\le C\left(|\xi_{s}|^{3}+|\eta_{s}|^{2}+s^{2}\right). $$ By (29), we have \( {\mathbb F}[|I_{s}|]\le C\big (s^{3/2}+s^{2}+s^{2}\big)=o(s). \) It follows that \([\partial _{t} \psi - G(\partial _{y}\psi,\partial _{xx}^{2}\psi)](x,y,t)\le 0\). Thus, u is a viscosity subsolution of (13). Similarly, we can prove that u is a viscosity supersolution of (13). Hence, (15) is proved. As for (16), let \(\varphi :C[0,1]\times C[0,1]\rightarrow \mathbb R\) be a continuous function with |φ(x,y)|≤C 0(1+∥x∥p+∥y∥q). For λ>4C 0, let φ λ (x,y)=(−λ)∨(φ(x,y)∧λ)∈C b (C[0,1]×C[0,1]). It is easily seen that φ(x,y)=φ λ (x,y) if |φ(x,y)|≤λ. If |φ(x,y)|>λ, then $$\begin{aligned} |\varphi(x,y)- & \varphi_{\lambda}(x,y)|=|\varphi(x,y)|-\lambda\le C_{0}(1+\|x\|^{p}+\|y\|^{q})-\lambda\\ \le & C_{0}\Big\{\Big(\|x\|^{p}-\lambda/(4C_{0})\Big)^{+} +\Big(\|y\|^{q}-\lambda/(4C_{0})\Big)^{+}\Big\}.
\end{aligned} $$ $$|\varphi(x,y)-\varphi_{\lambda}(x,y)| \le C_{0}\Big\{\Big(\|x\|^{p}-\lambda/(4C_{0})\Big)^{+} +\Big(\|y\|^{q}-\lambda/(4C_{0})\Big)^{+}\Big\}.$$ $$\begin{aligned} &{\lim}_{\lambda\rightarrow \infty} \limsup_{n\rightarrow \infty}\left|\hat{\mathbb{E}}^{\ast}\Big[\varphi\Big(\widetilde{\boldsymbol W}_{n}(\cdot) \Big)\Big]- \hat{\mathbb{E}}\Big[\varphi_{\lambda}\left(\widetilde{\boldsymbol W}_{n}(\cdot) \right)\Big]\right| \\ \le & {\lim}_{\lambda\rightarrow \infty} \limsup_{n\rightarrow \infty} C_{0}\left\{ \hat{\!\mathbb{E}}\left[\!\left(\max_{k\le n}\left|\frac{S_{k}^{X}}{\sqrt{n}}\right|^{p}-\frac{\lambda}{4C_{0}}\right)^{+}\right]+\hat{\mathbb{E}}\left[\left(\max_{k\le n}\left|\frac{S_{k}^{Y}}{n}\right|^{q}-\frac{\lambda}{4C_{0}}\right)^{+}\right]\!\right\}\\ =&0, \end{aligned} $$ by Lemma 8. Further, by (15), $${\lim}_{n\rightarrow \infty}\hat{\mathbb{E}}\left[\varphi_{\lambda} \left(\widetilde{\boldsymbol W}_{n}(\cdot) \right)\right]= \widetilde{\mathbb E}\left[\varphi_{\lambda} \left(B_{\cdot},b_{\cdot}\right)\right]\rightarrow \widetilde{\mathbb E}\left[\varphi \left(B_{\cdot},b_{\cdot}\right)\right]\;\; \text{as}~\lambda\rightarrow \infty. $$ (16) is proved, and the proof of Theorem 4 is now completed. □ When X k and Y k are d-dimensional random vectors, the tightness (24) of \(\widetilde {\boldsymbol W_{n}}(\cdot)\) also follows, because each sequence of the components of vector \(\widetilde {\boldsymbol W_{n}}(\cdot)\) is tight. Also, (29) remains true, because each component has this property. 
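For orientation before the self-normalized case treated next: with \(Y_{k}=X_{k}^{2}\), the generating function \(G(p,q)=\hat{\mathbb{E}}[(\frac{q}{2}+p)X_{1}^{2}]\) takes the closed form \((\frac{q}{2}+p)^{+}\overline{\sigma}^{2}-(\frac{q}{2}+p)^{-}\underline{\sigma}^{2}\), which is exactly the supremum of the classical expectation \((\frac{q}{2}+p)\sigma^{2}\) over variances \(\sigma^{2}\in[\underline{\sigma}^{2},\overline{\sigma}^{2}]\). A minimal numerical check of this endpoint representation (function and variable names are ours):

```python
# Sub-linear generating function for Y_k = X_k^2 in closed form:
# G(p, q) = (q/2 + p)^+ sigma_bar^2 - (q/2 + p)^- sigma_under^2, rewritten
# with max/min since c^+ * b - c^- * a = max(c, 0)*b + min(c, 0)*a.
def G(p, q, sig2_lo, sig2_hi):
    c = q / 2.0 + p
    return max(c, 0.0) * sig2_hi + min(c, 0.0) * sig2_lo

def G_as_sup(p, q, sig2_lo, sig2_hi):
    # Supremum of the linear expectation (q/2 + p) * sigma^2 over the variance
    # interval [sig2_lo, sig2_hi]; a linear function of sigma^2 attains its
    # supremum at an endpoint of the interval.
    c = q / 2.0 + p
    return max(c * sig2_lo, c * sig2_hi)

# The two expressions agree for any sign of q/2 + p.
for p, q in [(1.0, 2.0), (-3.0, 1.0), (0.5, -1.0), (0.0, 0.0), (-0.25, 0.5)]:
    assert abs(G(p, q, 0.4, 2.5) - G_as_sup(p, q, 0.4, 2.5)) < 1e-12
```

When \(\underline{\sigma}^{2}=\overline{\sigma}^{2}\) this G is linear and the limit collapses to classical Brownian motion with its quadratic variation; the gap between the two variances is what makes G, and hence the limit PDE (13), genuinely nonlinear.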
Moreover, it follows that $$ \begin{aligned} {\mathbb F}\left[\frac{1}{2}\left\langle A\xi_{t},\xi_{t}\right\rangle+\left\langle p,\eta_{t}\right\rangle\right] = & {\lim}_{m_{j}\rightarrow \infty} \hat{\mathbb{E}}\left[\frac{1}{2}\left\langle A\frac{S_{[m_{j}t]}^{X}}{\sqrt{m_{j}}},\frac{S_{[m_{j}t]}^{X}}{\sqrt{m_{j}}}\right\rangle+\left\langle p,\frac{S_{[m_{j}t]}^{Y}}{m_{j}}\right\rangle\right]\\ = & {\lim}_{m_{j}\rightarrow \infty} \frac{[m_{j}t]}{m_{j}} G(p,A)=t G(p,A). \end{aligned} $$ The remaining proof is the same as that of Theorem 4. □ Proof of the self-normalized FCLTs Let \(Y_{k}=X_{k}^{2}\). The function G(p,q) in (12) becomes $$G(p,q)=\hat{\mathbb{E}}\left[\left(\frac{q}{2} +p\right) X_{1}^{2}\right]=\left(\frac{q}{2} +p\right)^{+}\overline{\sigma}^{2}-\left(\frac{q}{2} +p\right)^{-}\underline{\sigma}^{2}, \;\; p,q\in \mathbb R. $$ Then, the process (B t ,b t ) in (15) and the process (W(t),〈W〉 t ) are identically distributed. In fact, note $$ \langle W\rangle_{t+s}-\langle W\rangle_{t}=(W(t+s)-W(t))^{2}-2\int_{0}^{s} (W(t+x)-W(t))d(W(t+x)-W(t)). $$ It is easy to verify that (W(t),〈W〉 t ) satisfies (i)-(iv) for (B ·,b ·). It remains to show that \((B_{1}, b_{1})\overset {d}= (W(1), \langle W\rangle _{1})\). Let {X n ;n≥1} be a sequence of independent and identically distributed random variables with \(X_{1}\overset {d}= W(1)\). Then, by Theorem 4, $$ \left(\frac{\sum_{k=1}^{n}X_{k}}{\sqrt{n}},\frac{\sum_{k=1}^{n} X_{k}^{2}}{n}\right)\overset{d}\rightarrow (B_{1}, b_{1}). $$ Further, let \(t_{k}=\frac {k}{n}\). Then, $$\left(\frac{\sum_{k=1}^{n}X_{k}}{\sqrt{n}},\frac{\sum_{k=1}^{n} X_{k}^{2}}{n}\right)\overset{d}= \left(W(1), \sum_{k=1}^{n} (W(t_{k})-W(t_{k-1}))^{2}\right)\overset{L_{2}}\rightarrow (W(1), \langle W\rangle_{1}). $$ Hence, \((B_{\cdot },b_{\cdot })\overset {d}=(W(\cdot), \langle W\rangle _{\cdot })\). We conclude the following proposition from Theorem 4. Suppose \(\hat {\mathbb {E}}[(X_{1}^{2}-b)^{+}]\rightarrow 0\) as b→∞. 
Then, for any bounded continuous function \(\psi :C[0,1]\times C[0,1]\rightarrow \mathbb R\), $$\hat{\mathbb{E}}\left[\psi\left(\frac{\widetilde{S}_{n}^{X}(\cdot)}{\sqrt{n}},\frac{\widetilde{V}_{n} (\cdot)}{n}\right)\right]\rightarrow \widetilde{\mathbb E}\left[\psi\Big(W(\cdot), \langle W \rangle_{\cdot} \Big) \right], $$ where \(\widetilde {V}_{n}(t)=V_{[nt]}+(nt-[nt])X^{2}_{[nt]+1}\), and, in particular, for any bounded continuous function \(\psi :C[0,1]\times \mathbb R\rightarrow \mathbb R\), $$ \hat{\mathbb{E}}\left[\psi\left(\frac{\widetilde{S}_{n}^{X}(\cdot)}{\sqrt{n}},\frac{V_{n}}{n}\right)\right]\rightarrow \widetilde{\mathbb E}\Big[\psi\Big(W(\cdot), \langle W \rangle_{1}\Big) \Big]. $$ Now, we begin the proof of Theorem 2. Let \(a=\underline {\sigma }^{2}/2\) and \(b=2\overline {\sigma }^{2}\). According to (30), we have \(\widetilde {\mathcal {V}}\big (\underline {\sigma }^{2}-\epsilon < \langle W \rangle _{1}<\overline {\sigma }^{2}+\epsilon \big)=1\) for all ε>0. Let \(\varphi :C[0,1]\rightarrow \mathbb R\) be a bounded continuous function. Define $$ \psi\big(x(\cdot),y\big)=\varphi\left(\frac{x(\cdot)}{\sqrt{a\vee y\wedge b}}\right), \;\; x(\cdot)\in C[0,1],\;y\in \mathbb R. $$ Then, \(\psi :C[0,1]\times \mathbb R\rightarrow \mathbb R\) is a bounded continuous function. Hence, by Proposition 1, $$ \hat{\mathbb{E}}\left[\varphi\left(\frac{\widetilde{S}_{n}^{X}(\cdot)/ \sqrt{n}}{\sqrt{a\vee(V_{n}/n)\wedge b}}\right)\right]\rightarrow \widetilde{\mathbb E}\left[\varphi\left(\frac{W(\cdot)}{\sqrt{a\vee (\langle W \rangle_{1})\wedge b}} \right) \right]=\widetilde{\mathbb E}\left[\varphi\left(\frac{W(\cdot)}{\sqrt{ \langle W \rangle_{1}}} \right) \right].
$$ $$\begin{aligned} \limsup_{n\rightarrow \infty} & \left|\hat{\mathbb{E}}^{\ast}\left[\varphi\left(\frac{\widetilde{S}_{n}^{X}(\cdot)/ \sqrt{n}}{\sqrt{ V_{n}/n }}\right)\right]-\hat{\mathbb{E}}\left[\varphi\left(\frac{\widetilde{S}_{n}^{X}(\cdot)/ \sqrt{n}}{\sqrt{a\vee(V_{n}/n)\wedge b}}\right)\right]\right|\\ &\le C \limsup_{n\rightarrow \infty} \mathbb{V}\left(V_{n}/n\not\in (a,b)\right)\\ &\le C\widetilde{\mathbb{V}}\left(\langle W \rangle_{1}\ge 3\overline{\sigma}^{2}/2\right) + C\widetilde{\mathbb{V}}\left(\langle W \rangle_{1}\le 2\underline{\sigma}^{2}/3\right)=0. \end{aligned} $$ $$ \hat{\mathbb{E}}^{\ast}\left[\varphi\left(\frac{\widetilde{S}_{n}^{X}(\cdot)}{\sqrt{V_{n} }}\right)\right]\rightarrow \widetilde{\mathbb E}\left[\varphi\left(\frac{W(\cdot)}{\sqrt{ \langle W \rangle_{1}}} \right) \right]. $$ The proof is now completed. □ First, note that $$\begin{aligned} \hat{\mathbb{E}}\left[X_{1}^{2}\wedge x^{2}\right]& \le \hat{\mathbb{E}}\left[X_{1}^{2}\wedge(kx)^{2}\right]\le \hat{\mathbb{E}}\left[X_{1}^{2}\wedge x^{2}\right]+k^{2}x^{2}\mathbb{V}(|X_{1}|>x),\;\; k\ge 1, \\ \hat{\mathbb{E}}\left[|X_{1}|^{r}\wedge x^{r}\right]& \le \hat{\mathbb{E}}\left[|X_{1}|^{r}\wedge (\delta x)^{r}\right]+\hat{\mathbb{E}}\left[(\delta x)^{r}\vee |X_{1}|^{r}\wedge x^{r}\right]\\ & \le \delta^{r-2} x^{r-2}l(\delta x)+x^{r} \mathbb{V}(|X_{1}|\ge \delta x), \;\;0<\delta<1,\; r>2. \end{aligned} $$ The condition (I) implies that l(x) is slowly varying as x→∞ and $$ \hat{\mathbb{E}}[|X_{1}|^{r}\wedge x^{r}]=o(x^{r-2}l(x)), \; r>2. $$ Further, $$ \frac{\hat{\mathbb{E}}^{\ast}[X_{1}^{2}I\{|X_{1}|\le x\}]}{l(x)}\rightarrow 1, $$ $$ C_{\mathbb{V}}\big(|X_{1}|^{r}I\{|X_{1}|\ge x\}\big)=\int_{x^{r}}^{\infty} \mathbb{V}(|X_{1}|^{r}\ge y)dy =o(x^{2-r} l(x)),\;\; 0<r<2. $$ If conditions (I) and (III) are satisfied, then $$ \hat{\mathbb{E}}[(|X_{1}|-x)^{+}]\le \hat{\mathbb{E}}^{\ast}[|X_{1}|I\{|X_{1}|\ge x\}] \le C_{\mathbb{V}}\big(|X_{1}|I\{|X_{1}|\ge x\}\big)=o(x^{-1} l(x)).
$$ Now, let d t = inf{x:x −2 l(x)=t −1}. Then, \(nl(d_{n})=d_{n}^{2}\). Similar to Theorem 2, it is sufficient to show that for any bounded continuous function \(\psi :C[0,1]\times C[0,1]\rightarrow \mathbb R\), $$ \hat{\mathbb{E}}\left[\psi\left(\frac{\widetilde{S}_{n}^{X}(\cdot)}{d_{n}},\frac{\widetilde{V}_{n}(\cdot)}{d_{n}^{2}}\right)\right]\rightarrow \widetilde{\mathbb E}\left[\psi(W(\cdot), \langle W\rangle_{\cdot})\right]\;\; \text{with}~W(1)\sim N(0,[r^{-2}, 1]). $$ Let \(\overline {X}_{k}=\overline {X}_{k,n}=(-d_{n})\vee X_{k}\wedge d_{n}\), \(\overline {S}_{k} =\sum _{i=1}^{k} \overline {X}_{i}\), \(\overline {V}_{k}=\sum _{i=1}^{k} \overline {X}_{i}^{2}\). Denote \( \overline {S}_{n}(t)=\overline {S}_{[nt]}+(nt-[nt])\overline {X}_{[nt]+1}\) and \(\overline {V}_{n}(t)=\overline {V}_{[nt]}+(nt-[nt])\overline {X}^{2}_{[nt]+1}\). Note $$\mathbb{V}\left(X_{k}\ne \overline{X}_{k}~\text{for some}~k\le n\right)\le n \mathbb{V}\left(|X_{1}|\ge d_{n}\right)=n\cdot o\left(\frac{l(d_{n})}{d_{n}^{2}}\right)=o(1). $$ It is sufficient to show that for any bounded continuous function \(\psi :C[0,1]\times C[0,1]\rightarrow \mathbb R\), $$ \hat{\mathbb{E}}\left[\psi\left(\frac{\overline{S}_{n}(\cdot)}{d_{n}},\frac{\overline{V}_{n}(\cdot)}{d_{n}^{2}}\right)\right]\rightarrow \widetilde{\mathbb E}\left[\psi(W(\cdot), \langle W\rangle_{\cdot})\right]. 
$$ Following the line of the proof of Theorem 4, we need only to show that for any 0<t≤1, $${} \limsup_{n\rightarrow \infty} \hat{\mathbb{E}}\left[\max_{k\le [nt]}\left|\frac{\overline{S}_{k}}{d_{n}}\right|^{p}\right]\le C_{p} t^{p/2},\;\; \limsup_{n\rightarrow \infty}\hat{\mathbb{E}}\left[\max_{k\le [nt]}\left|\frac{\overline{V}_{k}}{d_{n}^{2}}\right|^{p}\right]\le C_{p} t^{p},\;\; \forall p\ge 2; $$ $$ {\lim}_{n\rightarrow \infty} \hat{\mathbb{E}}\left[ \frac{q}{2} \left(\frac{\overline{S}_{[nt]}}{d_{n}}\right)^{2}+p \frac{\overline{V}_{[nt]}}{d_{n}^{2}}\right]=tG(p,q), $$ $$G(p,q)=\left(\frac{q}{2}+p\right)^{+} - r^{-2}\left(\frac{q}{2}+p\right)^{-}; $$ $${\kern-3.8cm} \max_{k\le n} \frac{|X_{k}|}{d_{n}}\overset{\mathbb{V}}\rightarrow 0. $$ In fact, (a) implies the tightness of \(\left (\frac {\widetilde {S}_{n}^{X}(\cdot)}{d_{n}},\frac {\widetilde {V}_{n}(\cdot)}{d_{n}^{2}}\right)\) and (29), and (b) implies the distribution of the limit process is uniquely determined. First, (c) is obvious, because $$\mathbb{V}\Big(\max_{k\le n}|X_{k}|\ge \epsilon d_{n}\Big)\le n \mathbb{V}\Big(|X_{1}|\ge \epsilon d_{n}\Big)=o(1) n \frac{l(\epsilon d_{n})}{\epsilon^{2} d_{n}^{2}} =o(1) n \frac{l(d_{n})}{d_{n}^{2}}=o(1). $$ As for (a), by the Rosenthal-type inequality (18), $$ {{\begin{aligned} &\hat{\mathbb{E}} \left[\max_{k\le [nt]}\left|\frac{\overline{S}_{k}}{d_{n}}\right|^{p}\right] \le C_{p}d_{n}^{-p}\left\{ [nt] \hat{\mathbb{E}}\left[|X_{1}|^{p}\wedge d_{n}^{p}\right]+\left([nt] \hat{\mathbb{E}}\left[|X_{1}|^{2}\wedge d_{n}^{2}\right]\right)^{p/2}\right.\\ & \quad + \left. 
\Big([nt] (\widehat{\mathcal{E}}[(-d_{n})\vee X_{1}\wedge d_{n}])^{+}+[nt] (\hat{\mathbb{E}}[(-d_{n})\vee X_{1}\wedge d_{n}])^{+}\Big)^{p}\right\}\\ & \quad \le C_{p}d_{n}^{-p}\left\{ [nt] \hat{\mathbb{E}}\left[|X_{1}|^{p}\wedge d_{n}^{p}\right]+\left([nt] \hat{\mathbb{E}}\left[|X_{1}|^{2}\wedge d_{n}^{2}\right]\right)^{p/2} + \left([nt] \hat{\mathbb{E}}\left[(|X_{1}|-d_{n})^{+}\right] \right)^{p}\right\}\\ & \quad \le C_{p}d_{n}^{-p}\left\{ [nt] o\left(d_{n}^{p-2}l(d_{n})\right) +\left([nt] l(d_{n})\right)^{p/2} + \left([nt] o\left(\frac{l(d_{n})}{d_{n}}\right) \right)^{p}\right\}\\ & \quad = o(1)[nt]\frac{l(d_{n})}{d_{n}^{2}}+\left(\frac{[nt]}{n} \right)^{p/2} \left(\frac{nl(d_{n})}{d_{n}^{2}}\right)^{p/2}+o(1)\left([nt] \frac{l(d_{n})}{d_{n}^{2}}\right)^{p}\le C_{p}t^{p/2}+o(1), \end{aligned}}} $$ and similarly, $$\begin{aligned} \hat{\mathbb{E}} \left[\max_{k\le [nt]}\left|\frac{\overline{V}_{k}}{d_{n}^{2}}\right|^{p}\right] & \le C_{p}d_{n}^{-2p}\left\{ [nt] \hat{\mathbb{E}}\left[|X_{1}|^{2p}\wedge d_{n}^{2p}\right]+\left([nt] \hat{\mathbb{E}}\left[|X_{1}|^{4}\wedge d_{n}^{4}\right]\right)^{p/2}\right.\\ &\quad + \left. \left([nt] \widehat{\mathcal{E}}\left[ X_{1}^{2}\wedge d_{n}^{2}\right] \right)+[nt] \left(\hat{\mathbb{E}}\left[ X_{1}^{2}\wedge d_{n}^{2}\right] \right)^{p}\right\}\\ & = o(1)+ C_{p} \left([nt] \frac{l(d_{n})}{d_{n}^{2}}\right)^{p}\le C_{p}t^{p}+o(1). \end{aligned} $$ Thus (a) follows. As for (b), note $$\begin{aligned} \frac{q}{2} \left(\frac{\overline{S}_{[nt]}}{d_{n}}\right)^{2}+p \frac{\overline{V}_{[nt]}}{d_{n}^{2}} =\left(\frac{q}{2}+p\right)\frac{\overline{V}_{[nt]}}{d_{n}^{2}}+q\frac{\sum_{k=1}^{[nt]-1}\overline{S}_{k-1}\overline{X}_{k}}{d_{n}^{2}}. 
\end{aligned} $$ By (32), $$\begin{aligned} \hat{\mathbb{E}}\left[ \sum_{k=1}^{[nt]-1}\overline{S}_{k-1}\overline{X}_{k} \right]\le& \sum_{k=1}^{[nt]-1}\hat{\mathbb{E}}\left[\overline{S}_{k-1}\overline{X}_{k} \right]\\ \le & \sum_{k=1}^{[nt]-1}\left\{\hat{\mathbb{E}}\left[\left(\overline{S}_{k-1}\right)^{+}\right]\hat{\mathbb{E}}\left[\overline{X}_{k}\right]-\hat{\mathbb{E}}\left[(\overline{S}_{k-1})^{-}\right]\widehat{\mathcal{E}}\left[\overline{X}_{k}\right]\right\}\\ \le & \sum_{k=1}^{[nt]-1} \left(\hat{\mathbb{E}}\left[| \overline{S}_{k-1}|^{2}\right]\right)^{1/2}\hat{\mathbb{E}}\left[(|X_{1}|-d_{n})^{+}\right]\\ =& O\left(\left(d_{n}^{2}\right)^{1/2}\right)\cdot n\hat{\mathbb{E}}\left[\left(|X_{1}|-d_{n}\right)^{+}\right]\\ =& O(d_{n})\cdot n\cdot o\left(\frac{l(d_{n})}{d_{n}}\right)=o\left(d_{n}^{2}\right), \end{aligned} $$ $$ \hat{\mathbb{E}}\left[- \sum_{k=1}^{[nt]-1}\overline{S}_{k-1}\overline{X}_{k} \right]=o\left(d_{n}^{2}\right). $$ $$ \frac{\hat{\mathbb{E}}\left[\overline{V}_{[nt]}\right]}{d_{n}^{2}}=\frac{[nt]\hat{\mathbb{E}}\left[X_{1}^{2}\wedge d_{n}^{2}\right]}{d_{n}^{2}}=\frac{[nt]}{n}\frac{nl(d_{n})}{d_{n}^{2}}=\frac{[nt]}{n}\rightarrow t $$ $$ \frac{\widehat{\mathcal{E}}\left[\overline{V}_{[nt]}\right]}{d_{n}^{2}}= \frac{[nt]\widehat{\mathcal{E}}\left[X_{1}^{2}\wedge d_{n}^{2}\right]}{d_{n}^{2}}=\frac{[nt]}{n}\frac{\widehat{\mathcal{E}}\left[X_{1}^{2}\wedge d_{n}^{2}\right] }{\hat{\mathbb{E}}\left[X_{1}^{2}\wedge d_{n}^{2}\right]}\rightarrow t r^{-2}. $$ Hence, we conclude that $$ \begin{aligned} \hat{\mathbb{E}}& \left[\frac{q}{2} \left(\frac{\overline{S}_{[nt]}}{d_{n}}\right)^{2}+p \frac{\overline{V}_{[nt]}}{d_{n}^{2}}\right] = \hat{\mathbb{E}}\left[\left(\frac{q}{2}+p\right)\frac{\overline{V}_{[nt]}}{d_{n}^{2}}\right]+o(1)\\ &\quad = t \left[ \left(\frac{q}{2}+p\right)^{+} - r^{-2}\left(\frac{q}{2}+p\right)^{-}\right] +o(1). \end{aligned} $$ Thus, (b) is satisfied, and the proof is completed.
□
Research supported by Grants from the National Natural Science Foundation of China (No. 11225104), the 973 Program (No. 2015CB352302) and the Fundamental Research Funds for the Central Universities. All authors contributed equally to the paper. All authors read and approved the final manuscript. School of Mathematical Sciences, Zhejiang University, Hangzhou, 310027, China: Zhengyan Lin, Li-Xin Zhang. Correspondence to Li-Xin Zhang. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Lin, Z., Zhang, LX. Convergence to a self-normalized G-Brownian motion. Probab Uncertain Quant Risk 2, 4 (2017). https://doi.org/10.1186/s41546-017-0013-8 Keywords: Sub-linear expectation · G-Brownian motion · Central limit theorem · Self-normalization. AMS 2010 subject classifications
Learning & Behavior, September 2011, Volume 39, Issue 3, pp 245–258

Pigeon and human performance in a multi-armed bandit task in response to changes in variable interval schedules

Deborah Racey, Michael E. Young, Dennis Garlick, Jennifer Ngoc-Minh Pham, Aaron P. Blaisdell

First Online: 06 March 2011

The tension between exploitation of the best options and exploration of alternatives is a ubiquitous problem that all organisms face. To examine this trade-off across species, pigeons and people were trained on an eight-armed bandit task in which the options were rewarded on a variable interval (VI) schedule. At regular intervals, each option's VI changed, thus encouraging dynamic increases in exploration in response to these anticipated changes. Both species showed sensitivity to the payoffs that was often well modeled by Luce's (1963) decision rule. For pigeons, exploration of alternative options was driven by experienced changes in the payoff schedules, not the beginning of a new session, even though each session signaled a new schedule. In contrast, people quickly learned to explore in response to signaled changes in the payoffs.

Keywords: Pigeon · Human learning · Associative learning · Acquisition

Direct interaction with the environment provides much of the information that informs subsequent actions. Rarely is choice made in the presence of perfect knowledge. In a multitude of domains, organisms begin by choosing almost blindly; what is learned about the environment varies according to which of the possibilities are experienced. The world often fails to reveal information about the utility of options not chosen—the route not taken, the career not selected, the product not purchased (Taleb, 2007). In a complex environment where options are many and/or variable, complete knowledge of prevailing contingencies may require very long-term exploration.
Even after long experience with the prevailing contingencies, continued exploration of options with less utility may be necessary in order to adapt to change. Under similar conditions, what leads some choosers to exploit their knowledge of differential utility and others to explore their options? Continued exploration may be an adaptive behavior learned through experience with changing environments (Rakow & Miler, 2009; Stahlman, Roberts, & Blaisdell 2010; Stahlman, Young, & Blaisdell 2010), or it may be that imperfect knowledge maintains exploration so that responding to changing conditions is a side effect rather than an adaptation. A complete study of the trade-off between exploration and exploitation will require the use of choice environments in which more than two options are available (cf. Rakow & Miler, 2009). We examined this trade-off in the present project by investigating human and pigeon behavior in an eight-option task. In addition to contending with the real-world complexity related to large numbers of options, most species live in changing environments. Although researchers in foraging behavior have investigated decision-making mainly through familiar, stationary environments, such that the individuals are fully informed about the nature of the options (e.g., Lin & Batzli, 2002; Zach, 1979), there is increasing interest in how such information is acquired (e.g., Mettke-Hofmann, Wink, Winkler, & Leisler 2004; Plowright & Shettleworth, 1990). The introduction of environmental changes has often been used to study how animals gather information about their environment. We took an approach that was inspired by the study of reinforcement-learning algorithms as applied to machine learning (Koulouriotis & Xanthopoulos, 2008; Sutton & Barto, 1998). 
In its simplest form, reinforcement-learning analyses often use the multi-armed (or "n-armed") bandit task to evaluate various methods of distributing exploration and exploitation (e.g., Dimitrakakis & Lagoudakis, 2008; Sikora, 2008). This task provides an excellent platform to explore choice in stationary (with unchanging payoffs) and nonstationary (with changing payoffs) environments, and it has also been applied to the domains of human learning and cognition (e.g., Burns, Lee, & Vickers 2006; Plowright & Shettleworth, 1990), economics (e.g., Banks, Olson, & Porter 1997), marketing and management (e.g., Azoulay-Schwartz, Kraus, & Wilkenfeld 2004; Valsecchi, 2003), and math and computer science (e.g., Auer, Cesa-Bianchi, Freund, & Schapire 1995; Koulouriotis & Xanthopoulos, 2008). The multi-armed bandit task (MABT) usually involves choosing among multiple possible actions that lead to immediate reward and about which nothing is initially known. The MABT took its name from the "one-armed bandit," another term for the slot machine. Rather than the one arm of a slot machine, however, a MABT has n options. It can be thought of as a set of n slot machines, each with an independent payoff schedule. After each selection, the reinforcer is awarded based on an underlying schedule of reinforcement. A player must explicitly explore an environment in order to learn the expected payoffs for these n options, and then can later exploit this knowledge. In a four-armed bandit task similar to the one used in the present study, Steyvers, Lee, and Wagenmakers (2009) employed a Bayesian optimal-decision model derived from the softmax equation (Luce, 1963) to explore how humans balance exploration with exploitation. 
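The structure just described (n options, each paying off on its own independent schedule, with nothing known at the outset) can be sketched as follows. The Bernoulli payoff probabilities reuse the per-peck values from Experiment 1's initial random-ratio training; the class and variable names are illustrative, not part of the article:

```python
import random

class BanditArm:
    """One option ("arm") with its own independent payoff schedule.
    Here each arm pays off probabilistically, like a random-ratio schedule."""
    def __init__(self, p_reward):
        self.p_reward = p_reward

    def pull(self):
        """Choose this arm once; return 1 if rewarded, else 0."""
        return 1 if random.random() < self.p_reward else 0

# An eight-armed bandit using the per-peck reward probabilities from
# the article's initial random-ratio training (.61 down to .02).
arms = [BanditArm(p) for p in (.61, .37, .22, .14, .08, .05, .03, .02)]

# A chooser knows none of these probabilities and must sample to learn them.
rewards = [arms[0].pull() for _ in range(1000)]
```

Because each arm's schedule is independent, sampling one arm reveals nothing about the others, which is what forces explicit exploration.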
In addition, eight-stimulus arrays very similar to the one used in the present study have been used with nonhuman animals (Jensen, Miller, & Neuringer 2006) and humans (Rothstein, Jensen, & Neuringer 2008), and in both cases behavior came under the control of the prevailing contingencies. Thus, this MABT provides a decision task that is potentially both complex and challenging, yet at the same time simple enough that it can be used to study a wide range of decision-making in both humans and other animals. Exploration versus exploitation An arm pull is an action, and at any point an actor is expected to rely on an estimate of action values based on the sampling history with each option. Choosing the action with the highest estimated action value (the "greedy" action) is exploitation, because the actor is exploiting its current knowledge. If the actor chooses a nongreedy action, it is exploring—a behavior that potentially enhances overall knowledge by improving the estimate of a nongreedy option. Greedy actions allow the actor to maximize its chance of immediate reward for the very next action, but nongreedy actions may be preferable, in order to maximize long-term reward or value (i.e., they actually are greedy, but over an extended time horizon). Reward may be lower in the short term when exploring, but long-term value may be greater, since the actor may discover actions that are better than the current greedy action or that provide viable alternatives if the action with the long-run highest value is currently less profitable (due to molecular aspects of the payoff schedule in which an option's value is temporarily lower; e.g., for VI schedules) or later becomes unprofitable (due to molar changes in the payoff schedule; e.g., changing from a variable ratio 5 to variable ratio 50). Whether exploration or exploitation is best at any given choice point will depend on the expected changes in these payoffs, inter alia. 
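The action-value bookkeeping described above, in which an actor estimates each option's value from its own sampling history, can be sketched as an incremental sample average (the class name and structure are illustrative assumptions, not the article's analysis):

```python
class ActionValueEstimate:
    """Incremental sample-average estimate of one option's payoff."""
    def __init__(self):
        self.n = 0        # number of times this option has been sampled
        self.value = 0.0  # running mean of experienced payoffs

    def update(self, reward):
        """Fold one experienced outcome into the running mean."""
        self.n += 1
        self.value += (reward - self.value) / self.n

est = ActionValueEstimate()
for reward in [1, 0, 1, 1]:
    est.update(reward)
# est.value is now the mean payoff experienced so far (≈ 0.75).
```

The greedy action is then the option whose estimate is currently highest; choosing any other option is exploration in the sense used here.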
For a nonstationary bandit task, option values change during the task by changing the underlying molar contingencies—as if the room full of slot machines were reprogrammed occasionally during the allotted time of play. Continued exploration is critical if an organism is to track and adapt to these changes. The machine-learning literature provides some guidance regarding methods for action selection appropriate to the bandit task. The greedy strategy may be used to solve stationary bandit problems, and it requires that every response be made to the option with the highest value (i.e., the richest reinforcement schedule). This strategy results in quick and complete preference for one option, which is precisely what should be avoided in a nonstationary environment. Alternatively, Luce's (1963) decision rule (often called softmax) describes a strategy that uses the expected rewards of the options to choose them probabilistically. In other words, it assigns the highest selection probability to the greedy option, but the rest of the remaining options are chosen according to their value estimates. The probability of choosing action a is $$ P(\mathrm{action}_a) = \frac{e^{\theta \cdot \mathrm{value}_a}}{\sum_{j=1}^{n} e^{\theta \cdot \mathrm{value}_j}}, $$ where θ is the exploitation parameter, value_a denotes the current estimated value of action a, and n is the number of possible actions. When θ is zero, exploitation is absent and exploration of alternatives is predicted to be maximal, such that each action is equiprobable. Higher values of θ result in higher levels of exploitation; the option with the highest action value (the greedy response) is selected more frequently as θ increases. At very high levels of θ, Luce's decision rule becomes indistinguishable from the greedy decision strategy.
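Luce's decision rule translates directly into code. This sketch, with made-up action values, shows the behavior at the two extremes of θ described above:

```python
import math

def luce_choice_probs(values, theta):
    """Luce's (1963) decision rule (softmax): each action is chosen with
    probability proportional to exp(theta * value)."""
    weights = [math.exp(theta * v) for v in values]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical estimated values for eight options.
values = [6, 5, 4, 3, 2, 1, 0.5, 0.25]

explore = luce_choice_probs(values, theta=0.0)  # theta = 0: all equiprobable
exploit = luce_choice_probs(values, theta=5.0)  # high theta: near-greedy
```

With θ = 0 every option is chosen with probability 1/8 regardless of value; with θ = 5 the greedy option absorbs nearly all of the probability mass, approximating the greedy strategy.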
The inclusion of the θ parameter allows for adjustment of the levels of exploitation and exploration to describe a particular organism's behavior, depending on variables such as time, satiety, and environmental uncertainty; it can take a different value for each species and subject and at each stage in learning. People and pigeons are not Turing machines, and their estimates of action values may be imperfect. Regardless, these action values may be based simply on an overall history with each option, such as the proportion of total responses to that option that have been reinforced, or on some more complex calculation. For example, these estimates may be weighted toward more recent experience or sensitive to the changes in reinforcement probability over time that are inherent in VI schedules. For this study, we assumed these action values to be equal to the overall programmed likelihood of reinforcement represented by the VI schedule for each option. Thus, we operationally defined exploration as choosing a response that has a lower molar reinforcement rate. The present experiments examined both pigeon and human performance using a nonstationary MABT. Each species chose from among eight response options in order to provide a complex set of choices that would constrain the theoretical analysis. We were interested in testing three hypotheses. First, could Luce's decision rule be used to assess the balance between exploitation and exploration for pigeons and humans in our choice task? Second, would both species adaptively and quickly increase their level of exploratory behavior in response to environmental cues that signal a change in choice payoffs? For pigeons, each daily session began with a new set of choice payoffs, and thus an adaptively optimal pigeon would begin each session with maximal exploration and be unaffected by the previous day's programmed schedules.
For people, a new session began every few minutes and was signaled by a discriminative cue at the top of the display that should prompt a sudden increase in exploration. Third, would exploration continue throughout a session, or would pigeons and people exhibit a higher level of exploitation later in the session, once differential choice value had been determined? A total of 6 experimentally naïve adult White Carneaux pigeons (Columba livia) participated in the experiment. The pigeons were individually housed in steel home cages with metal wire mesh floors in a vivarium, and a 12-h light:dark cycle was maintained. Testing was conducted 5–7 days/week during the light cycle. The pigeons were maintained at approximately 85% of their free-feeding weights, and were given free access to grit and water while in their home cages. Testing was conducted in a flat-black Plexiglas chamber (38 cm wide × 36 cm deep × 38 cm high). All stimuli were presented by computer on a color LCD monitor (NEC MultiSync LCD1550M) visible through a 23.2 × 30.5 cm viewing window in the middle of the front panel of the chamber. Pecks to the monitor were detected by an infrared touch screen (Carroll Touch, Elotouch Systems, Fremont, CA) mounted on the front panel. A 28-V houselight located in the ceiling of the box was used for illumination, except during time outs. A food hopper (Coulbourn Instruments, Allentown, PA) was located below the monitor with an access hole situated flush with the floor. All experimental events were controlled and data recorded by a Pentium III class computer (Dell, Austin, TX). A video card controlled the monitor using the SVGA graphics mode (800 × 600 pixels). Preliminary training The 6 pigeons were first trained to eat from the hopper in the chamber. Next, responses were autoshaped to a white disk that appeared in the center of the screen. Pecking to the disk resulted in the hopper rising for 3 s before lowering again. 
This was followed by a 60-s intertrial interval (ITI) before the next disk was displayed. Once the pigeon was consistently responding to the disk, training began. Bandit training The pigeons were presented with differently colored disks on the screen, with each disk approximately 2 cm in diameter. The disks were arranged in a circular array starting at the top of the screen, such that disks that were opposite each other were approximately 8 cm apart (see Fig. 1). This display was located so that the bottom of the lowest disk was 3 cm above the bottom edge of the screen. The colors used for the disks, from the left clockwise, were gray, light blue, red, yellow, pink, green, dark blue, and orange. The reward value given to a particular disk was fixed throughout the session, but the reward values were randomly redistributed across disks from one session to the next. Thus, in one session the values assigned to disks clockwise from the top may have been 6, 192, 12, 3, 384, 24, 48, and 96, but the distribution on the following session may have been 12, 192, 48, 6, 96, 384, 3, and 24. This redistribution of values was done at the beginning of each session. The relative positions of the colors were not changed from session to session. Throughout training and testing, sessions were 60 min long. Fig. 1 The computer screen as it was presented to the pigeons. The disks were identified in the analysis by consecutive numbers in a clockwise direction, with the top disk being 0. Initial training consisted of assigning random ratios (RRs) to the disks, using the following probabilities of each peck being rewarded: .61, .37, .22, .14, .08, .05, .03, and .02. After 60 sessions of training, it became clear that the pigeons were showing strong biases to disks located in particular positions and were not pecking to disks located in other positions, even if they had the highest reward value.
One possibility was that pecking to the disks was relatively cheap to the pigeons, so the difference in reward structure was not very tangible. Another factor was that pigeons tend toward maximization (i.e., high exploitation) on RR schedules by showing nearly exclusive responding for the option with the richest experienced payoff structure (Herrnstein & Loveland, 1975). To increase sensitivity to reward and to encourage exploration by temporarily decreasing the reward value of a disk, the reward structure was changed from a random ratio to a variable-interval schedule. The variable intervals used were 3, 6, 12, 24, 48, 96, 192, and 384 s and varied by up to ±50% of the scheduled interval (e.g., for VI 3, the interval varied between 1.5 and 4.5 s). After another 60 sessions, it was clear that the pigeons were still showing strong biases to disks located in particular positions. Shifting the color assignments revealed that the bias was based on location and not color. The pigeons completed 5 sessions in which only one disk from the display was shown, and the disk had a .61 probability of reward. In this situation, the pigeons did reliably peck to the disk, regardless of its color or position. The pigeons then completed 40 sessions with all eight disks present, one of which had a .61 probability of reward and seven of which had no reward. The pigeons still showed a strong bias to particular disk locations, even if the locations were not associated with reward in a given session. A final attempt to equalize the perceived reward value of the disks and encourage exploration involved presenting the pigeons again with all eight disks for 24 sessions. However, the reward schedule was made more extreme, with VIs of 3, 9, 27, 81, 243, 729, 2,187, and 6,561 s (with experienced intervals again varying up to ±50% of the scheduled interval). In addition, at the end of the 24 sessions, the disk that was most pecked was eliminated. 
For the subsequent 24 sessions, only the remaining seven disks were present, and the longest reward interval was no longer assigned to any disk. At the end of this set of 24 sessions, the most pecked disk was again eliminated along with the longest reward interval still being used. This procedure progressed until the pigeons were given 24 sessions with only the three least-pecked (by location) disks remaining. To keep the pigeons at 85% of free-feeding weight, a session was terminated once 300 rewards had been received during the session. For the test sessions, the pigeons were presented with all eight disks for 24 sessions with VIs of 3, 9, 27, 81, 243, 729, 2,187, and 6,561 s. Assignment of VI schedule to the disks varied daily. Only the data from this final set of testing sessions were analyzed. To analyze the data, we used two approaches. First, we will describe the frequency with which each disk was chosen as a function of its programmed payoff. This approach will provide a general assessment of the degree of control established by the reward structure. Second, we will provide an analytical assessment of the pigeons' exploratory behavior using Luce's decision rule (Luce, 1963). From a reinforcement-learning perspective, low θ values indicate that a chooser either has not learned the differential payoffs or has maintained high exploration despite the differential payoffs. However, a sudden decrease in θ (when responding is not a function of previous disk value) indicates that a chooser has recognized that the payoffs have changed, thus prompting an increase in exploratory behavior. The complicating factor in our analysis is that the programmed contingencies may not have been experienced equally by every organism. A pigeon may have undersampled a particular choice and thus obtained a biased estimate of its payoff. Pigeons frequently showed disk biases and failed to fully explore each of the options.
Thus, in our second set of choice analyses for pigeons, we used disk location as an independent predictor of the best-fitting θ values and predicted lower θs (i.e., poor response differentiation as a function of payoff value) for less-preferred disks. To estimate behavioral differentiation, we used the following instantiation of Luce's decision rule: $$ P(\mathrm{key}_i) = \frac{e^{\theta \cdot \mathrm{payoff}_i}}{\sum_{j=1}^{8} e^{\theta \cdot \mathrm{payoff}_j}}, $$ in which payoff_i is the logarithm of the inverse of the programmed VI. The equation generates eight probabilities, one for each of the eight disks, that sum to 1.0. To fit Luce's decision rule to behavior, we used nonlinear mixed-effects modeling and identified the maximum likelihood best-fitting parameter values (Cudeck & Harring, 2007; Davidian & Giltinan, 2003). Mixed-effects modeling is used to simultaneously generate parameter estimates for each subject and as a function of the independent variables (e.g., Laird & Ware, 1982; Pinheiro & Bates, 2004). This approach is superior to the two-stage approach, in which parameter estimates are derived independently for each subject and the estimates are used in a subsequent analysis, because the results of the first stage do not include information about uncertainty in the parameter estimates that are used in the second stage (Shkedy, Straetemans, & Molenberghs, 2005). We examined changes in the maximum likelihood for θ in Eq. 1 across birds (random effect) as a function of our predictors (fixed effects). To apply Luce's decision rule, we needed to identify the best proxy for disk value (i.e., payoff). Preliminary analyses identified that an appropriate function mapping VI to value was the logarithm of the reinforcement rate (1/VI). The inverse translates the VI into an expected rate, so that higher values are associated with better schedules (Fig.
2 reveals that this transformed variable is a good proxy for the relative long-run probability that the pigeon was rewarded for choosing that disk). The log transformation produced a stronger fit than the untransformed reinforcement rates. Fig. 2 Log (1/VI) and experienced reinforcement probability as a function of the programmed VI for the 6 pigeons Choice differentiation as a function of programmed payoffs When we examined the proportion of trials on which each disk was chosen by each pigeon, the pigeons showed a marked preference for disks with the richest programmed VI schedules (see Fig. 3, solid lines). One pigeon, Cosmo, showed a strong preference for the disk with the second-best payoff schedule. A closer examination of the pigeons' disk choices, however, revealed that despite our attempts to train out disk biases, the pigeons still showed general preferences for disks in the lower part of the display (see the peck location density plots shown in Fig. 4; these plots were produced using JMP's nonparametric bivariate density function; SAS Institute Inc., Cary, NC). For some pigeons, certain disks were so rarely sampled that these choices are not visible in our density plots. When these less-preferred disks were associated with high payoffs for a particular session, the pigeon rarely experienced the high value of these disks. Fig. 3 Probability of choosing a disk associated with each programmed VI for Experiment 1 (pigeons), with the best-fitting Luce function superimposed (dashed lines) Fig. 4 Peck density plots for each pigeon in Experiment 1. Only pecks on the disks are shown As a baseline of comparison, we initially ignored these disk biases and identified the best-fitting θ for Eq. 1 [using log (1/VI) as a proxy for payoff rate] as a function of 5-min trial block (1–12). The analysis revealed that the degree of response differentiation, θ, varied as a function of block, F(11, 8975) = 5.71, p < .0001, BIC = −4711, R² = .40.
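The article fit θ with nonlinear mixed-effects modeling across birds; a much simpler single-subject analogue of that fit is a maximum-likelihood grid search over θ, sketched here with hypothetical choice counts (the counts and the grid are assumptions for illustration, not the article's data or method):

```python
import math

# Payoff proxy from the article: log(1/VI) for the eight programmed schedules.
VIS = [3, 9, 27, 81, 243, 729, 2187, 6561]
PAYOFFS = [math.log(1 / vi) for vi in VIS]

def log_likelihood(theta, payoffs, counts):
    """Multinomial log-likelihood of observed choice counts under Luce's rule."""
    weights = [math.exp(theta * p) for p in payoffs]
    total = sum(weights)
    return sum(n * math.log(w / total) for n, w in zip(counts, weights))

def fit_theta(payoffs, counts):
    """Crude single-subject maximum-likelihood estimate of theta via grid
    search (the article used nonlinear mixed-effects modeling instead)."""
    grid = [i / 100 for i in range(201)]  # theta in [0, 2]
    return max(grid, key=lambda t: log_likelihood(t, payoffs, counts))

counts = [400, 250, 150, 90, 50, 30, 20, 10]  # hypothetical disk choices
theta_hat = fit_theta(PAYOFFS, counts)
```

Higher fitted θ means steeper differentiation of choices by payoff (more exploitation); θ near zero means choices are insensitive to the schedules (more exploration).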
The maximum likelihood value of θ was .10 in Block 1, reached .32 by Block 3, peaked at .34 in Block 6, and steadily decreased toward .22 in Block 12. Thus, the pigeons tended to quickly differentiate the better disks among the choice alternatives, but as the session progressed, their behavior became increasingly undifferentiated. Interestingly, this behavior was highly correlated with the number of pecks produced throughout the session: Pecking was highest during Blocks 2–4 and then gradually fell throughout the session. By Block 12, responding averaged 28% of the peak rate of responding. It appears that as the pigeons' level of satiety increased, the motivation to differentiate among the payoff disks decreased, or the motivation to exploit abated. We have defined exploitation for this experiment as a response to the option with the richest VI schedule; thus, the decreases in θ later in the session indicate increased exploration/decreased exploitation. A molecular definition of exploitation would involve the choice of the response with the highest momentary probability of payoff. When payoffs are delivered by VI schedule, the longer it has been since a particular option has been chosen, the greater that probability is. The response option with the leanest overall VI schedule may be the richest at the moment, if enough time has passed since it was last chosen. If the increasing exploration of options later in a session were the result of pigeons learning to choose other options due to an increase in their momentary reinforcement rate, we would expect an increase in payoff rate to accompany it. This outcome did not occur. Figure 5 shows the proportion of responses reinforced for each trial block within a session and indicates that—with the exception of Estelle—decreases in differentiation were associated with decreases, not increases, in reinforcement. 
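The molecular property of VI schedules underlying this analysis, namely that an option's momentary payoff probability grows with the time since it was last chosen, follows from how a VI schedule arms rewards. A minimal sketch (the class and time units are illustrative; intervals vary by up to ±50% of the nominal value, as in the article):

```python
import random

class VariableInterval:
    """Variable-interval (VI) schedule: a response pays off only if the
    currently armed interval has elapsed since the last collected reward."""
    def __init__(self, nominal_s):
        self.nominal_s = nominal_s
        self.available_at = self._draw_interval()

    def _draw_interval(self):
        # Intervals vary by up to +/-50% of the nominal value.
        return random.uniform(0.5 * self.nominal_s, 1.5 * self.nominal_s)

    def respond(self, t):
        """Respond at session time t (seconds); return True if rewarded."""
        if t >= self.available_at:
            self.available_at = t + self._draw_interval()
            return True
        return False

vi3 = VariableInterval(3.0)
# Once the armed interval elapses, the reward waits until collected, so the
# longer an option is neglected, the more likely a return visit pays off.
```

This is why, unlike a random-ratio schedule, a VI schedule makes occasional exploratory responses to leaner options worthwhile.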
Fig. 5 Proportions of responses that were rewarded as a function of trial block for each pigeon in Experiment 1 The consequences of our using a VI schedule are revealed in the likelihood of continuing to respond on a disk that has just been rewarded. Figure 6 (left column) shows a smoothed spline of the likelihood of returning to a disk as a function of time elapsed since it was last rewarded, for the disks assigned the three richest schedules. The figure reveals a temporary decrease immediately following reward for some pigeons, at least for the VI 3-s and 9-s disks. During this dip, the pigeons were more likely to choose another disk (an exploratory response) as a function of its relative payoff likelihood, as shown in Figure 3. Fig. 6 Smoothed response likelihood plots for each pigeon or human participant, showing the relative likelihoods of choosing the VI 3-s disk (top row), the VI 9-s disk (second row), and the VI 27-s disk (bottom row) as a function of time since the disk's previous reinforcement. Pigeons are on the left, and humans are on the right. The x-axis scales differ for each figure; the scales range from 0 to 8 in the top graphs, 0 to 20 in the middle graphs, and 0 to 50 in the bottom graphs. The predicted disk choices for each pigeon are shown superimposed on Figure 3. Luce's decision rule predicts that responding is a monotonic function of disk value, and thus the rule cannot account for the unusual data patterns observed in Cosmo when disk value was solely a function of programmed (not experienced) payoff. However, the other birds' behavior was well approximated by Eq. 2. Finally, we examined the degree to which disk value on a previous session lingered into the next session. In the first 5-min part of a session (Block 1), response likelihood was as much a function of a disk's value on the previous session [t(6) = 4.33, p < .01] as of its value for the current session [t(6) = 4.07, p < .01].
Over the next five blocks, the effect of a disk's previous value steadily decreased (ts of 2.52, 1.66, 1.28, and 0.51), whereas the effect of a disk's current value was maintained (ts = 4.54, 3.74, 3.88, and 4.55). Choice differentiation as a function of programmed payoffs and disk location Because some pigeons were not showing sufficient exploration of all eight response disks, using the programmed payoff in fitting Luce's decision rule is problematic. To incorporate the effect of disk location for individual birds, we assessed θ as a function of both trial block and disk location. The analysis revealed that the degree of response differentiation, θ, varied as a function of both block, F(11, 8968) = 3.74, p < .0001, and disk location, F(7, 8968) = 7.94, p < .0001, BIC = −5,048, R² = .46. A model that included an interaction produced a poorer fit, BIC = −4,510, indicating that it was overparameterized, and thus the interaction was not included in our analysis. The best-fitting θ values as a function of trial block and disk are shown in Figure 7, which shows the main effects of both block (line graph) and disk location (star plot). It is readily apparent that exploitation (i.e., behavioral differentiation as a function of disk payoff) peaks relatively early in a session and steadily decreases, paralleling our earlier analysis that did not include disk location as a predictor. It is also apparent that responses on disks in the upper right part of the display (disks 0, 1, and 2) produce weaker behavioral differentiation as a function of disk payoff (i.e., lower θs), confirming the behavioral patterns documented in the peck density plots of Figure 4. Although the fit was better for this analysis, the improvements were relatively minor. Fig. 7 Best-fitting values of θ as a function of 5-min trial block (line graph) and disk location (star plot; the axis range is 0 to .5).
Error bars represent ±1 standard error

In an eight-armed bandit task, pigeons' disk choice was largely a function of the VI schedule associated with each disk. For 4 of the pigeons, their behavior was broadly consistent with that predicted by Luce's decision rule as applied to the programmed reinforcement rate [log (1/VI)], thus suggesting that the derived θ values are good estimates of the degree of exploitation exhibited by the pigeons. Pigeons did not demonstrate high degrees of exploration cued by session onset at the beginning of a session; rather, their low θ values were a result of behavior being heavily influenced by carryover from the prior session's disk values. Within 10 min, however, their responding was largely driven by the new reinforcement contingencies. Thus, increases in exploration were likely produced by adversity—only when preferred disks were no longer paying off at a high rate did the pigeons begin to explore other choices (see Gallistel, Mark, King, & Latham, 2001, for an alternative interpretation of matching in nonstationary environments). Our pigeons, which were working for primary reinforcers, showed less exploitation as a session progressed. This change could have been due to an anticipated change in disk payoffs, but the evidence suggests that exploitation decreased due to an increase in satiety. Regardless of this pattern, we did not see high degrees of exploitation at any point in a session. Averaged across every session and trial block, no pigeon chose its preferred disk more than 45% of the time (see Fig. 3). When these results were averaged across sessions but broken down by trial blocks, no pigeon chose its preferred disk more than 55% of the time (not shown). The pigeons were not adopting greedy strategies in our nonstationary environment. Despite our attempts to eliminate disk biases, the birds continued to show location preferences that were independent of a disk's programmed reinforcement schedule.
We attempted to incorporate these biases into our analysis as an independent factor that allowed less behavioral differentiation (lower θ values) for certain disk locations, but the fit was only marginally better. An alternative formulation that would retain Luce's decision rule would be to incorporate disk location into our estimates of value, thus making a disk's value a function of both its scheduled payoff and its location. Unfortunately, this approach would require a post hoc assessment of disk preferences for each bird.

In our second experiment, we used a similar design to examine exploration versus exploitation in humans. We anticipated rapid changes in θ and fewer location preferences that were independent of payoffs. The literature on risky choice and risk perception suggests that people might be well adapted to identifying and responding to changes in payoffs for decisions under uncertainty (for a discussion of various examples, see Rakow & Miler, 2009).

A total of 20 undergraduates (16 female, 4 male) at the University of California, Los Angeles (UCLA), received course credit for participating in the experiment. Testing was conducted on a notebook computer with a 38-cm (diagonal) color monitor set at 1,152 × 864 pixels. Participants used a mouse to guide a cursor around a screen, and a response was recorded every time the left mouse button was clicked. A built-in speaker was used to give auditory feedback when a reward was given. Before the experiment commenced, participants recorded their gender, age, ethnicity, and grade point average at UCLA. We told participants that they would be doing a test of intelligence, that they would be presented with eight differently colored disks on the screen, and that they would be required to click on the disks using the cursor (see Fig. 8).
The instructions indicated that sometimes when they did this, a box at the bottom of the screen would light up with the text "Click for a point," at which point they should click the box to receive a point; their objective was to earn as many points as possible. Participants then completed a sample trial where the disks were present in an identical arrangement to that used for the pigeons (however, the particular disk color assignments differed from those used in Experiment 1). Clicking on any of the disks resulted in the box at the bottom lighting up. When participants clicked on the box, they heard the sound of a penny dropping, the box went dark, and they were awarded a point.

The computer screen as it was presented to the human participants in Experiment 2. For the purpose of analysis, the disks were numbered consecutively in a clockwise direction, with the top disk being disk 0.

Following the sample trial, participants completed six sessions. Each session was 6 min long. We used the same reward schedule that had been used with the pigeons: VIs of 3, 9, 27, 81, 243, 729, 2,187, and 6,561 s, with ±50% variation. The assignment of variable intervals to disks was constant within a session but was rearranged from session to session. The same rearrangement from session to session was used for each participant. Counters were provided at the top of the screen giving an indication of how many points had been collected in each session, and the appropriate counter was updated every time a point was collected. At the conclusion of each session, the participants needed to click on a button (not shown in Fig. 8) to start the next session. At the end of the fifth session, they were asked to type into the computer answers to the questions "What do you think was happening during the task?" "What strategy did you use to earn points?" "Within (not between) a given session, how did the colored discs differ from each other?" and "Was there a difference from one session to another?
If so, what was the difference?" Following this, they were asked to do the final, sixth session.

When we examined the proportion of trials on which each disk was chosen by each person, all of the participants showed a systematic relationship between the scheduled payoff rate and the likelihood of choosing the corresponding disk (see Fig. 9, solid lines). A closer examination of the participants' disk choices revealed no strong disk biases, unlike the strong biases observed for the pigeons (not shown due to the large number of participants). Every participant showed sufficient sampling of each disk; the participant showing the strongest bias still chose the least preferred disk 62 times (3% of total choices) across the six sessions.

Actual probability of choosing each disk with the associated VI payoff rate for each participant in Experiment 2, with the best-fitting Luce fit superimposed (dashed lines)

We initially analyzed the degree of response differentiation, θ, as a function of session (1–6) and blocked time within session (30-s Blocks 1–12) using Eq. 1 (Fig. 9 shows the data fits as dashed lines; Fig. 10 shows changes in θ as a function of session and block). The analysis revealed that the degree of response differentiation, θ, varied as a function of block, F(11, 11389) = 56.48, p < .0001, and session, F(11, 11389) = 4.48, p < .0001, with a Block × Session interaction, F(55, 11389) = 2.62, p < .0001, BIC = −27,010, R² = .89. The maximum likelihood value of θ increased steadily throughout a session, and did so more rapidly in the later sessions (unfilled symbols) than in the earlier sessions (filled symbols), with only the first session showing a prolonged and gradual increase in θ. Unlike with the pigeons, there was no indication of a loss of control as each session progressed.

Best-fitting values of θ as a function of 30-s trial block (1–12) and session (1–6, as indexed in the legend).
The standard errors were approximately .035.

As a consequence of the use of a VI schedule, most people showed a temporary decrease in the likelihood of choosing a disk after it was rewarded. Figure 6 (right column) shows the individual smoothed likelihood splines for each participant for the three richest schedules, and the vast majority of participants developed an aversion to returning to a disk that was just rewarded; the likelihood of returning to it was a function of its VI schedule. Thus, due to the temporary decrease in the efficacy of a recently rewarded response, participants were being encouraged to explore by sampling other disks.

Finally, we examined the degree to which disk value on a previous session lingered into the next session. In the first 30 s of a session (Block 1), response likelihood was largely a function of a disk's value for the current session [t(19) = 7.75, p < .01], but there was a small, nonsignificant effect of the disk's value from the previous session [t(19) = 1.68, p = .11]. Over the next four blocks, the effect of a disk's previous value remained small (ts of 2.63, 0.78, 1.23, and 1.13) and was only significant in Block 2, whereas the effect of a disk's current value increased and leveled off (ts = 10.47, 10.87, 11.95, and 11.89). By the final block, performance was entirely a function of a disk's value for the current session [t(19) = 14.01, p < .01], with little effect of the disk's value for the previous session [t(19) = 0.99, p = .32].

The strategy reports were largely uninformative. Six of the participants reported that points earned were somehow a function of time or delay (the correct controlling variable), 2 reported that points were a function of the number of times chosen, 1 reported a complex geometrical relationship, and the remaining participants' reports were either vague or equivalent to reporting that they did not know.
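The carryover analysis above is essentially a regression of early-session responding on two predictors: the disk's current-session value and its previous-session value. A toy version with fabricated data can be sketched as follows; the 0.8/0.1 weights and the rolled assignment are invented to mimic the human pattern, in which the current value dominates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Disk "value" is the log programmed reinforcement rate, log(1/VI).
vis = np.array([3, 9, 27, 81, 243, 729, 2187, 6561], dtype=float)
current = np.log(1 / vis)        # this session's assignment
previous = np.roll(current, 3)   # last session's (rearranged) assignment

# Fabricated response propensities: dominated by the current values,
# with a small carryover from the previous session (weights invented).
response = 0.8 * current + 0.1 * previous + rng.normal(0, 0.1, size=8)

# Regress responding on both predictors plus an intercept; the two
# slopes play the role of the current- and previous-value effects.
X = np.column_stack([np.ones(8), current, previous])
beta, *_ = np.linalg.lstsq(X, response, rcond=None)
print(beta[1], beta[2])  # slope on current value >> slope on previous value
```

For the pigeons, the same sketch would flip: early in a session the previous-session slope would dominate, shrinking toward zero as the new contingencies take hold.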
Sex, self-reported GPA, and self-reported strategy did not significantly predict the best-fitting value of θ, but our sample size was too small to identify all but the largest individual-difference effects (a prior study had found a weak negative correlation, r = −.09, between intelligence and exploratory behavior; Steyvers et al., 2009). In our eight-armed bandit task, human disk choice was largely a function of the VI schedule associated with each disk. Behavior was generally consistent with that predicted by Luce's decision rule as applied to the programmed reinforcement rate [log (1/VI)]. Exploration was high early in a session and was only weakly a function of a disk's previous value. This lack of carryover, accompanied by a high degree of exploration in the first block of a session (see Fig. 10), likely occurred because the transition from session to session was clearly demarcated for the participants (Fig. 8 shows the highlighting of the current session at the top of the screen). Thus, our human participants showed an adaptive increase in exploration in the presence of a signal that indicated a change in disk payoffs, unlike the pigeons in Experiment 1. Finally, like the pigeons, our human participants did not demonstrate a greedy strategy (see Fig. 9). Instead, they continued to explore other alternatives late in a session. Both pigeons and people produced response patterns that were often well modeled by Luce's (1963) decision rule. Although there were some exceptions (most notably the pigeon Cosmo in Exp. 1), these deviations may have been driven by differences in the programmed and experienced disk payoffs or by idiosyncratic strategies that we have not assessed. Additionally, neither species demonstrated greedy strategies in the nonstationary environments used in the present study. 
Whereas exclusive choice of the highest value disk would seem adaptive once a chooser has learned that disks only change their value across sessions, the use of a VI schedule likely contributed to higher exploration by producing a temporary decrease in the value of a disk (see Fig. 6). Given the clocked nature of a VI, a disk with a leaner schedule is more likely to be rewarded than a disk with a richer schedule if the lean disk has not been chosen in a long time. For example, consider the choice between a VI 3-s and a VI 9-s disk. If the VI 3-s disk was chosen 8 s into a session, it would have an average delay of 3 s until its next reward was available (i.e., 11 s into the session). By contrast, the VI 9-s disk would have an average delay of 1 s until its next reward was available (i.e., 9 s into the session). Thus, the adoption of an optimal fully informed strategy would cause a chooser to occasionally sample the leaner schedules as a function of the elapsed time since their last reinforcement. Both the pigeons' and people's behavior often demonstrated a temporary decrease in the likelihood of choosing a disk that was recently rewarded, along with a rapid increase soon after (Fig. 6). After a peak in likelihood, responding gradually fell, which is largely a result of responses to a disk eventually being rewarded, thus truncating the distribution. The greatest species differences involved (a) strong disk biases in the pigeons but not in people and (b) the weak carryover of disk value across sessions for people but the strong carryover for pigeons. The strong disk biases were quite intransigent in our pigeons. Even after extensive attempts to train out these biases, the pigeons still underexplored certain responses (see Fig. 4). We believe that there are two significant contributors to these biases. First, the upper disks may have required substantial effort to reach, thus reducing their value due to a high response cost (cf. Jensen et al., 2006). 
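The VI 3-s versus VI 9-s arithmetic above can be checked with a small simulation. This is a simplified sketch of a VI schedule's holding property (a drawn interval, uniform within ±50% of the mean as in the experiments, must have elapsed since the last collected reward); the elapsed times are illustrative:

```python
import random

def vi_reward_armed(mean_interval, elapsed, rng):
    """Is a VI schedule's next reward available after `elapsed` seconds?

    The interval is drawn uniformly within +/-50% of the schedule mean,
    as in the experiments; once armed, the reward is held until the
    disk is next chosen.
    """
    interval = rng.uniform(0.5 * mean_interval, 1.5 * mean_interval)
    return elapsed >= interval

rng = random.Random(1)
trials = 100_000

# A VI 9-s disk that has been ignored for 12 s versus a VI 3-s disk
# collected from just 1 s ago: the leaner disk is now the better bet.
lean_hits = sum(vi_reward_armed(9, 12, rng) for _ in range(trials))
rich_hits = sum(vi_reward_armed(3, 1, rng) for _ in range(trials))
print(lean_hits / trials, rich_hits / trials)  # ~0.83 versus 0.0
```

This is why, under VI contingencies, even a fully informed optimal chooser should occasionally sample lean schedules rather than responding greedily.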
Second, the pigeons may have been content to satisfice, such that there was insufficient motivation to maximize their reward rate. Given that the response rate gradually abated later in the session, satiation may have reduced the incentive to identify the disk with the highest value. The second large species difference involved the fact that the pigeons' behavior early in a session was heavily influenced by the disk values from the previous session, whereas people showed little session-to-session carryover of value. This result is even more remarkable given the extensive experience that the pigeons had with daily changes during training (309 sessions) and testing (24 sessions), an ample opportunity to learn that disk value did not (except in rare instances) carry over across sessions. In contrast, our human participants received only 6 min of training before disk payoffs changed and yet showed little value carryover. Thus, the pigeons increased exploration largely in response to an experienced change in payoff rates, whereas people increased exploration when a discriminative cue dictated. The control over performance exerted by disk values from the prior session is striking when one considers that nonstationary procedures reveal strong constraints on the duration of working memory in the pigeon. Pigeon working memory has been found to last from tens of seconds, in delayed matching-to-sample procedures (e.g., Grant, 1976; White, Ruske, & Colombo 1996), to no more than 1 or 2 h, on open-field spatial search tasks (Spetch, 1990; Spetch & Honig, 1988). This stands in stark contrast to retention of correct responses in stationary procedures, which have been shown to last for months or years (e.g., Cook, Levison, Gillett, & Blaisdell 2005; Vaughan & Greene, 1984). 
Above-chance retention of disk values over a 24-h interval after only a single session of exposure has previously been reported in two-choice situations (e.g., Grace & McLean, 2006; Kyonka & Grace, 2008; Schofield & Davison, 1997). These studies involving between-session changes in reinforcement schedules reveal some lasting influence of the prior session's reinforcement contingencies at the beginning of the next session. To our knowledge, however, ours are the first results showing similar carryover effects on schedules involving more than two choice options. This suggests that pigeons acquired some memory for the distribution of values across multiple choice options from a single session, the influence of which persisted in the following session. We can only speculate that our task contained features that better tap into processes of long-term memory than have previous working memory procedures. Although our human participants showed adaptive increases in exploratory behavior at the beginning of a session, session onset was clearly signaled. It is not known how quickly people would increase their exploratory behavior if change was not signaled. Without a signaled change in schedule, any increase in exploration would likely be a function of the magnitude of the change in disk value and of which disks (e.g., those of previously high or low value) changed their value. If a low-value, and thus undersampled, disk suddenly became the richest option, a high exploiter would be slow to discover this change. In contrast, if a high-value, and thus heavily sampled, disk suddenly decreased in value (which was typically the case in the present experiments), this change would be apparent to both high and low exploiters. People's sudden increase in exploratory behavior at the onset of each session suggests a level of operant control that goes beyond merely responding to changes in the payoffs of the operanda. 
One possibility is that this result provides further evidence of behavioral variability as an operant (Neuringer, 2002; Page & Neuringer, 1985), but the rapidity with which our human participants responded suggests insufficient time for variability to have been reinforced during the confines of our experiment. Thus, people previously must have learned the utility of exploration in the face of a rapidly changing environment. Pigeons, on the other hand, may be better adapted to more stable environments that reward perseveration over flexibility. Although an actor always faces uncertainty about the utility of future actions, the randomness of events underlying this uncertainty extends from that conforming to well-understood linear-based Gaussian distributions to those best described by poorly understood nonlinear power laws (Taleb, 2007). It would be very interesting to understand how actors as diverse as humans and pigeons face action-making decisions in these vastly different types of stochastic contexts that characterize real-world situations. Given the importance of understanding choice and the common desire to optimize choice strategies in stationary and nonstationary environments, we hope that more researchers will consider spending less time exploiting the study of simple choice tasks with stationary payoffs, and instead allocate more effort toward exploring many-choice tasks in nonstationary environments (e.g., Davison & Baum, 2000; Ward & Odum, 2008).

References

Auer, P., Cesa-Bianchi, N., Freund, Y., & Schapire, R. E. (1995). Gambling in a rigged casino: The adversarial multi-armed bandit problem. In Proceedings of the 36th Annual Symposium on Foundations of Computer Science (pp. 322–331). Piscataway, NJ: IEEE Press.
Azoulay-Schwartz, R., Kraus, S., & Wilkenfeld, J. (2004). Exploitation versus exploration: Choosing a supplier in an environment of incomplete information. Decision Support Systems, 38, 1–18.
Banks, J., Olson, M., & Porter, D. (1997). An experimental analysis of the bandit problem. Economic Theory, 10, 55–77.
Burns, N. R., Lee, M. D., & Vickers, D. (2006). Individual differences in problem solving and intelligence. Journal of Problem Solving, 1, 20–32.
Cook, R. G., Levison, D. G., Gillett, S. R., & Blaisdell, A. P. (2005). Capacity and limits of associative memory in pigeons. Psychonomic Bulletin & Review, 12, 350–358.
Cudeck, R., & Harring, J. R. (2007). Analysis of nonlinear patterns of change with random coefficient models. Annual Review of Psychology, 58, 615–637.
Davidian, M., & Giltinan, D. M. (2003). Nonlinear models for repeated measurements: An overview and update. Journal of Agricultural, Biological, and Environmental Statistics, 8, 387–419.
Davison, M., & Baum, W. M. (2000). Choice in a variable environment: Every reinforcer counts. Journal of the Experimental Analysis of Behavior, 74, 1–24.
Dimitrakakis, C., & Lagoudakis, M. G. (2008). Rollout sampling approximate policy iteration. Machine Learning, 72, 157–171.
Gallistel, C. R., Mark, T. A., King, A. P., & Latham, P. E. (2001). A rat approximates an ideal detector of changes in rates of reward: Implications for the law of effect. Journal of Experimental Psychology: Animal Behavior Processes, 27, 354–372.
Grace, R. C., & McLean, A. P. (2006). Rapid acquisition in concurrent chains: Evidence for a decision model. Journal of the Experimental Analysis of Behavior, 85, 181–202.
Grant, D. S. (1976). Effect of sample presentation time on long-delay matching in pigeons. Learning and Motivation, 7, 580–590.
Herrnstein, R. J., & Loveland, D. H. (1975). Maximizing and matching on concurrent ratio schedules. Journal of the Experimental Analysis of Behavior, 24, 107–116.
Jensen, G., Miller, C., & Neuringer, A. (2006). Truly random operant responding: Results and reasons. In E. A. Wasserman & T. R. Zentall (Eds.), Comparative cognition: Experimental explorations of animal intelligence (pp. 459–480). New York: Oxford University Press.
Koulouriotis, D. E., & Xanthopoulos, A. (2008). Reinforcement learning and evolutionary algorithms for non-stationary multi-armed bandit problems. Applied Mathematics and Computation, 196, 913–922.
Kyonka, E. G. E., & Grace, R. C. (2008). Rapid acquisition of preference in concurrent chains when alternatives differ on multiple dimensions of reinforcement. Journal of the Experimental Analysis of Behavior, 89, 49–69.
Laird, N. M., & Ware, J. H. (1982). Random-effects models for longitudinal data. Biometrics, 38, 963–974.
Lin, Y. K., & Batzli, G. O. (2002). The cost of habitat selection in prairie voles: An empirical assessment using isodar analysis. Evolutionary Ecology, 16, 387–397.
Luce, R. D. (1963). Detection and recognition. In R. D. Luce, R. R. Bush, & E. Galanter (Eds.), Handbook of mathematical psychology (Vol. 1, pp. 103–189). New York: Wiley.
Mettke-Hofmann, C., Wink, M., Winkler, H., & Leisler, B. (2004). Exploration of environmental changes relates to lifestyle. Behavioral Ecology, 10, 2004.
Neuringer, A. (2002). Operant variability: Evidence, functions, and theory. Psychonomic Bulletin & Review, 9, 672–705.
Page, S., & Neuringer, A. (1985). Variability is an operant. Journal of Experimental Psychology: Animal Behavior Processes, 11, 429–452. doi:10.1037/0097-7403.11.3.429
Pinheiro, J. C., & Bates, D. M. (2004). Mixed-effects models in S and S-PLUS. New York: Springer.
Plowright, C. M., & Shettleworth, S. J. (1990). The role of shifting in choice behavior of pigeons on a two-armed bandit. Behavioural Processes, 21, 157–178. doi:10.1016/0376-6357(90)90022-8
Rakow, T., & Miler, K. (2009). Doomed to repeat the successes of the past: History is best forgotten for repeated choices with nonstationary payoffs. Memory & Cognition, 37, 985–1000.
Rothstein, J. B., Jensen, G., & Neuringer, A. (2008). Human choice among five alternatives when reinforcers decay. Behavioural Processes, 78, 231–239. doi:10.1016/j.beproc.2008.02.016
Schofield, G., & Davison, M. (1997). Nonstable concurrent choice in pigeons. Journal of the Experimental Analysis of Behavior, 68, 219–232.
Shkedy, Z., Straetemans, R., & Molenberghs, G. (2005). Modeling anti-KLH ELISA data using two-stage and mixed effects models in support of immunotoxicological studies. Journal of Biopharmaceutical Statistics, 15, 205–223.
Sikora, R. T. (2008). Meta-learning optimal parameter values in non-stationary environments. Knowledge-Based Systems, 2(8), 800–806.
Spetch, M. L. (1990). Further studies of pigeons' spatial working memory in the open-field task. Animal Learning & Behavior, 18, 332–340.
Spetch, M. L., & Honig, W. K. (1988). Characteristics of pigeons' spatial working memory in an open-field task. Animal Learning & Behavior, 16, 123–131.
Stahlman, W. D., Roberts, S., & Blaisdell, A. P. (2010). Effect of reward probability on spatial and temporal variation. Journal of Experimental Psychology: Animal Behavior Processes, 36, 77–91.
Stahlman, W. D., Young, M. E., & Blaisdell, A. P. (2010). Response variability in pigeons in a Pavlovian task. Learning & Behavior, 38, 111–118.
Steyvers, M., Lee, M. D., & Wagenmakers, E. (2009). A Bayesian analysis of human decision-making on bandit problems. Journal of Mathematical Psychology, 53, 168–179.
Sutton, R. S., & Barto, A. G. (1998). Reinforcement learning: An introduction. Cambridge, MA: MIT Press.
Taleb, N. N. (2007). The black swan: The impact of the highly improbable. New York: Random House.
Valsecchi, I. (2003). Job assignment and bandit problems. International Journal of Manpower, 24(7), 844–866.
Vaughan, W., & Greene, S. L. (1984). Pigeon visual memory capacity. Journal of Experimental Psychology: Animal Behavior Processes, 10, 256–271. doi:10.1037/0097-7403.10.2.256
Ward, R. D., & Odum, A. L. (2008). Sensitivity of conditional-discrimination performance to within-session variation of reinforcer frequency. Journal of the Experimental Analysis of Behavior, 90, 301–311.
White, K. G., Ruske, A. C., & Colombo, M. (1996). Memory procedures, performance and processes in pigeons. Cognitive Brain Research, 3, 309–317. doi:10.1016/0926-6410(96)00016-X
Zach, R. (1979). Shell dropping: Decision-making and optimal foraging in northwestern crows. Behaviour, 68, 106–117.

© Psychonomic Society, Inc. 2011

Author affiliations:
1. Western Carolina University, Cullowhee, USA
2. Southern Illinois University at Carbondale, Carbondale, USA
3. University of California at Los Angeles, Los Angeles, USA

Racey, D., Young, M.E., Garlick, D. et al. Learn Behav (2011) 39: 245. https://doi.org/10.3758/s13420-011-0025-7
First Online: 06 March 2011
Research article | Published: 31 May 2018

Non-completion of secondary education and early disability in Norway: geographic patterns, individual and community risks

Arnhild Myhr1, Tommy Haugan2, Monica Lillefjell1 & Thomas Halvorsen3

School non-completion and early work disability are major public health challenges in Norway, as in most western countries. This study aims to investigate how medically based disability pension (DP) among young adults varies geographically and how municipal socioeconomic conditions interact with non-completion of secondary education in determining DP risk. The study includes a nationally representative sample of 30% of all Norwegians (N = 350,699) aged 21–40 in 2010 from Statistics Norway's population registries. Multilevel models incorporating factors at the individual, neighbourhood and municipal levels were applied to estimate the neighbourhood and municipality general contextual effects in DP receipt, and to detect possible differences in the impact of municipal socioeconomic conditions on DP risk between completers and non-completers of secondary education. A pattern of spatial clustering at the neighbourhood (ICC = 0.124) and municipality (ICC = 0.021) levels is clearly evident, indicating that the underlying causes of DP receipt vary systematically across neighbourhoods and municipalities in Norway. Non-completion of secondary education is strongly correlated with DP receipt among those younger than 40. Socioeconomic characteristics of the municipality are also significantly correlated with DP risk, but these associations are conditioned by the completion of secondary education. Living in a socioeconomically advantageous municipality (i.e. high income, high education levels and low unemployment and social security payment rates) is associated with a higher risk of DP, but only among those who do not complete their secondary education.
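Intraclass correlations like those quoted here are typically computed on the latent logit scale, where the individual-level residual variance is fixed at π²/3 and nested random effects accumulate. A sketch of that standard calculation follows; the variance components used are illustrative placeholders, not the study's estimates:

```python
import math

def three_level_iccs(var_municipality, var_neighbourhood):
    """Latent-scale ICCs for a three-level logistic model.

    Individuals are nested in neighbourhoods nested in municipalities;
    the level-1 residual is fixed at pi^2 / 3 on the logit scale.
    Two residents of the same neighbourhood share both random effects,
    so the neighbourhood ICC includes the municipal variance as well.
    """
    resid = math.pi ** 2 / 3
    total = var_municipality + var_neighbourhood + resid
    icc_municipality = var_municipality / total
    icc_neighbourhood = (var_municipality + var_neighbourhood) / total
    return icc_municipality, icc_neighbourhood

# Illustrative variance components only (not the study's estimates).
icc_m, icc_n = three_level_iccs(0.08, 0.39)
print(round(icc_m, 3), round(icc_n, 3))  # 0.021 0.125
```

The neighbourhood ICC is necessarily at least as large as the municipality ICC under this nesting, which matches the ordering of the two reported values.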
Although the proportion of DPs was equal in rural and urban areas, it is evident that young people living in urban settings are more at risk of early DP than their counterparts living in rural parts of the country when controlling for other risk factors. The association between school non-completion and DP risk varies between municipalities and local socioeconomic environments. The interplay between personal characteristics and the local community is important in DP risk among young adults, implying that preventive measures should be directed not only at the individual level, but also include the educational system and the local community.

The proportion of young adults prematurely leaving the labour market due to disability pensions (DPs) has increased significantly during the last decade [1]. Recent statistics have shown that 1.8% of the Norwegian population between the ages of 18 and 29 receive DP, almost double the 2007 rate [2]. The leading reason for DP receipt among individuals below 40 years of age in Norway is mental illness [3]. Brage and Thune [3] attributed this increase in part to more precise diagnostics, changes in health status and growing requirements in the labour market. Early work-life exit among young people is a major public health challenge and a threat to the Nordic welfare state model, which depends on high employment rates [4]. Young adults who come to rely on social insurance benefits for most of their life course place a high socioeconomic burden on their society [5]. They also experience substantial lifetime consequences in terms of health and socioeconomic marginalisation [6, 7]. Increasing DP rates will, therefore, ultimately lead to a society with larger socioeconomic and health disparities. DP receipt is also an important area for scientific inquiry because DP is an indicator of society's health status as a whole, given that DP eligibility criteria are strictly medical [8].
Social factors present at different levels of society, at the individual, family, community and national levels, strongly influence the health of young people [9]. Individual factors related to early DP have been extensively investigated, showing a clear educational gradient with heavy clustering of DP among non-completers of secondary education [10, 11]. Low education achievement is associated with lower work participation, higher risk of long-term socioeconomic marginalisation [12,13,14,15], unemployment [16] and mental and physical health issues [9, 17, 18]. School non-completers are also far more likely to receive DP [10, 11, 19] or depend on other medical and non-medical public benefits early in life [10, 11, 20]. Moreover, numerous studies have shown that childhood adversities, such as parental disability and low socioeconomic status, are associated with physical and mental health problems [21,22,23,24,25,26,27], low educational achievement [28, 29] and work disability [10, 30, 31] later in life. A large body of research has linked area characteristics, both physical and social, to a range of health behaviours and health-related outcomes [32,33,34,35]. Official statistics demonstrate, for instance, large geographical variations in DP recipient rates in Norway, and that certain structural (contextual) factors may partly explain this variation [2]. Nordic population studies have shown that the prevalence of DP correlates with municipal socioeconomic conditions, such as economic development, unemployment rate and education level [36,37,38]. Moreover, socioeconomically disadvantaged areas are associated with fewer health-promoting behaviours [33, 39], higher morbidity [33, 40, 41] and all-cause mortality [42]. A number of studies have examined the effect of local socioeconomic conditions on the incidence of DP receipt, but less attention has been paid to the variation in contextual risk across subgroups of the population.
It is plausible that the socioeconomic context of the area may not affect health equally for all people, and certain personal characteristics and features of the social environment may act as moderators [34, 43]. In other words, there may be statistical interactions between personal characteristics, features of the residential context and the health outcomes studied. According to the relative deprivation hypothesis, individuals who are disadvantaged, relative to others in a certain neighbourhood, will experience stress-inducing social comparisons, which may have adverse consequences for individual health [34, 44]. This study investigates how medically based DP among young adults varies geographically, and how municipal socioeconomic conditions interact with non-completion of secondary education in determining DP risk. The specific aims of the study were: (i) to explore geographic distributions of non-completion of secondary education and DP among young adults in Norway; (ii) to assess how neighbourhood and municipality differences relate to DP risk in young adulthood; and (iii) to examine whether municipal socioeconomic conditions interact with the association between school non-completion and risk of DP in young adulthood. This study builds on a 30% random sample, stratified by age, gender and municipality of residence of the entire Norwegian population aged 21–40 years in 2010 (N = 395,514), extracted from Statistics Norway's event database, FD-trygd [45]. These data are linked to the National Education Database (NUDB) through a unique 11-digit personal identification number assigned to all Norwegian citizens. Entitlement to DP was observed at the end of 2010, when respondents were between the ages of 21 and 40. The main focus of this study centred on the mechanisms for exclusion from working life.
Hence, individuals entitled to a DP due to cognitive abnormalities (N = 527) (mainly those with extensive cognitive disabilities), most of whom never achieve ordinary paid work, were excluded from the study. See Fig. 1 for inclusion and exclusion criteria for the present study. The final sample size was 350,699 individuals. The unique personal identification number allowed us to identify information about the individual's registered parents (or caregivers). We merged the dataset with census information on individuals' home municipalities, using macro statistics on demography, employment and economic development from the Norwegian Social Science Data (NSD) regional database. Flow chart of the participants in the present study who were included in the analysis. The proportion of eligible subjects with complete data is 88.7% The outcome variable Our dependent variable was whether the individual was registered as a DP recipient in the National Insurance Administration (FD-trygd database) at the end of the follow-up period in 2010. In Norway, the eligibility criteria for granting a DP are strictly medical, based on an assessment that a person's earning ability is permanently reduced by at least 50% due to illness, injury or disability. In addition, applicants need to meet the following criteria: (1) be between the ages 16 and 67, (2) have been a member of the national insurance program for at least 3 years (all residents of the country are members) and (3) have undergone appropriate medical treatment and rehabilitation that might improve their earning ability. For each individual, we sourced information on age, gender, employment record and parental DP from the FD-trygd database. Parental DP is known to be associated with both low educational achievement and early DP [10, 30] and was, therefore, included as a covariate in the analysis. NUDB provided secondary education data on non-completion, defined as having not obtained a secondary education degree by age 21.
The variable was used as both an explanatory and moderator variable in the final analysis. Official statistics [46] have shown that men dominate the 20–29 group of DP recipients, while women are overrepresented in the 30–39 age group. This study accounted for the bias this disparity could present by interacting age with gender in the analysis. In other words, the model reflected the effects of gender at different ages. Neighbourhood level The FD-trygd database provides information on neighbourhood of residence, which constituted our second level of analysis. We used the individual's recorded census enumeration district in 2010, which is the lowest geographical level for Norwegian population statistics, to identify their neighbourhoods [45]. The binary variable "rural" identifies the neighbourhood of residence as rural or urban. Urban settlements have clusters of homes where at least 200 people live within a distance of 50 m or less, while the rural areas have a lower population density than this threshold [47]. Municipality level The third unit of analysis comprised all the 430 Norwegian municipalities in 2010. Norway's municipalities are subject to several common national laws and regulations, which means that they represent relatively homogeneous and, therefore, comparable units. Our model included a spatial lag variable and a set of municipality characteristics that describe the socioeconomic conditions. The spatial lag variable is the mean of the age-adjusted DP rates in neighbouring municipalities and is included to account for spatial dependencies that may exist in the larger regional context. Education level, defined as the percentage of persons aged 20–39 who completed secondary education, and income (the average gross income for all municipal residents aged 17 years and above) were used to evaluate the importance of the municipal socioeconomic environment. 
The analysis also included the percentage of inhabitants aged 20–39 years receiving unemployment benefits and social security benefits, which reflect the socioeconomic environment more indirectly. The municipality characteristics enter the model as continuous, grand-mean-centred 2010 census variables (except the income variable, which was only available for 2009), and are sourced from NSD's regional database. Statistical approach We investigated the relevance of the residential context as well as the association between municipal socioeconomic conditions and DP receipt in young adulthood and tested the hypothetical interactions using logistic multilevel models [48,49,50]. Individuals (level 1) are nested within neighbourhoods (level 2, N = 12,894), which are nested within municipalities (level 3, N = 430). Each of these contexts may condition individual level variation due to unmeasured factors. Hence, we fitted a three-level random intercept model by using maximum likelihood estimation [48,49,50] to distinguish the individual, neighbourhood and municipality sources of variation in DP receipt. The multilevel framework allows us to simultaneously examine the effects of group-level and individual-level predictors while also accounting for non-independence of observations (clustering) within higher-level units. We modelled the prediction of DP receipt in young adulthood in 10 steps. First, we estimated an "empty" model, only including a random intercept, which represents the variation in DP across the three levels. This allowed us to determine the impact of the neighbourhood and municipality context on DP receipt [51]. Models 2–4 in Table 2 included all the individual level variables. In Table 3, we extended the random intercept logit model for the relationship between school non-completion and DP risk to allow the non-completion effect to vary across municipalities.
Multilevel models with many random components are computationally demanding and, given our large dataset, such models became intractable. Thus, to keep the model simple, a two-level random slope model (i.e. individuals nested within municipalities) was fitted in order to examine whether the relationship between school non-completion and DP risk varies between municipalities. A likelihood ratio test (LR test) was used to compare the goodness of fit of the random intercept and random slope models. In the final steps, we included all the neighbourhood and municipality variables and adjusted for age, gender and parental DP receipt (Table 4). Models 2–5 added the interaction terms of non-completion of secondary education with the municipality variables: education level (Model 2), gross household income (Model 3), unemployment rate (Model 4) and the rate of social security payments (Model 5). To quantify the influence of neighbourhood and municipality of residence on DP receipt, we computed the median odds ratio (MOR) [51] and the intraclass correlation coefficients (ICCs) [50]. The MOR translates the area-level variance from the log-odds scale to the odds-ratio scale and may, in our case, be interpreted, in a simplified way, as the median increase in the odds of receiving a DP if an individual moved to another neighbourhood (or municipality) with a higher risk [52, 53]. Thus, the higher the MOR, the greater the contextual effects. The ICC expresses the correlation in the outcome (i.e. DP receipt) between two individuals taken randomly from the same neighbourhood (or municipality).
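Both contextual measures follow directly from the estimated area-level variance. The Python sketch below illustrates the ICC under the latent variable method and the MOR translation of Merlo et al.; the variance value used is hypothetical, not one of our estimates:

```python
import math

def icc_latent(area_variance: float) -> float:
    """ICC under the latent variable method: the level-1 residual
    variance of a logit model is fixed at pi^2/3 (about 3.29)."""
    return area_variance / (area_variance + math.pi ** 2 / 3)

def median_odds_ratio(area_variance: float) -> float:
    """MOR = exp(sqrt(2 * variance) * 0.6745), where 0.6745 is the
    75th percentile of the standard normal distribution."""
    return math.exp(math.sqrt(2 * area_variance) * 0.6745)

# Hypothetical neighbourhood-level variance on the log-odds scale
sigma2 = 0.46
print(round(icc_latent(sigma2), 3))         # share of total variance between areas
print(round(median_odds_ratio(sigma2), 2))  # median odds increase between two random areas
```

A MOR of 1 would indicate no area-level variation in DP risk; the larger the MOR, the stronger the contextual effect, consistent with the interpretation above.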
By using the latent variable method [50, 54], which considers the variance from a standard logistic distribution (π²/3 = 3.29), we calculated the ICC with the following formula: $$ ICC=\frac{Var\left({V}_f\right)}{Var\left({V}_f\right)+\frac{\pi^2}{3}} $$ The percentage of proportional change in variance (PCV) at the respective levels quantifies the percentage of the variance of the empty model explainable by predictor variables inserted into the more complex models [55]. The model parameters were estimated by a mixed effects method using Stata/MP software (version 13). We used a geographic information system (GIS) [56, 57] to explore and visualise the geographical patterns of school non-completion and DP receipt among young adults. Measures of global spatial correlation were calculated using the Global Moran's I statistic and local indicators of spatial association (LISA), which evaluate whether the patterns of interest (i.e. school non-completion and DP receipt) are clustered, dispersed or random [58, 59]. Finally, we used ArcGIS 10 for Desktop for the spatial analysis. Table 1 presents descriptive information for the individual and contextual variables among receivers and non-receivers of DP. At the end of 2010, a total of 7065 (2.0%) individuals were receiving DP, of whom 83.2% (N = 5876) had not completed secondary education. Of those who received DP, 37% (N = 2615) had no previous employment records registered in the FD-trygd database, compared to 2.7% (N = 9191) in the non-receiving population. Table 1 Individual and community characteristics of young receivers and non-receivers of disability pension (DP) Spatial pattern of secondary education non-completion and DP rates among young adults The geographical distribution of secondary education non-completion (Fig. 2) and DP rates (Fig. 3) differs greatly among the Norwegian municipalities. The dropout rates in the 430 municipalities in Norway have a clear geographical pattern (Fig.
2), with high dropout rates in the northern and south-eastern regions, and low dropout rates in western Norway. The prevalence of DP among young adults varies from zero to 8.3%, with an average of 2.0% for the total country. The geographic distribution of non-completers of upper secondary education (percentages) among individuals aged 21–40 years (born between 1970 and 1989) in Norway, 2010 The geographic distribution of disability pensions (percentages) among young adults aged 21–40 years (born between 1970 and 1989) in Norway, 2010 Measuring the global spatial correlation with the Moran's I estimator revealed significant clustering in both school non-completion and DP rates, with correlations of .23 and .12 and z-scores of 13.8 and 7.6, respectively. However, comparing Fig. 2 and Fig. 3 revealed distinct geographical distributions of the two variables. The spatial patterns, especially in the northern region, showed a clear clustering of school leavers, but far less clustering of DP rates. The south showed concentrations of municipalities with high DP rates without high dropout rates, while western Norway showed low dropout rates and low DP rates. An analysis of local autocorrelation (LISA) for both non-completion and DP rates confirmed these patterns, as does Fig. 4, which shows the results from both analyses. Specifically, high non-completion rates cluster in much of northern Norway, while clustering of DP rates is very limited here. High DP rates cluster in the southern region, but high non-completion rates do not. Finally, the western region shows substantial overlap in low-rate clustering for both variables. Local indicators of spatial association (LISA) for secondary education non-completion rates and disability pension rates Parametric estimation The prevalence of early DP at the neighbourhood and municipality levels differs. In the first step, we estimated an "empty" model.
With only the second- and third-level random intercepts in our model, we found that the ICCs are 0.124 and 0.021. In other words, model 1 (in Table 2) suggests that 12.4 and 2.1% of the variation in DP risk can be attributed to differences between neighbourhoods and municipalities, respectively. Table 2 shows the individual and parental covariates of DP receipt in young adulthood. Non-completion of secondary education is positively associated with DP receipt, and this association seems to have strengthened over the last two decades. The association between school non-completion and DP receipt is stronger for individuals born in the period 1985–1989 compared to their counterparts born between 1970 and 1974. However, our data do not capture DP receipt after the age of 21–25 for the 1985–1989 cohort, which complicates the comparison between the cohorts. Being male, older than average or having parents (mother and/or father) who receive DP are all correlated with higher DP risk before age 40. The interaction terms between age and gender are negative and statistically significant, indicating that the positive association between being male and DP receipt weakens with age. A complete lack of employment history is associated with the largest DP risk. Table 2 The impact of non-completion of secondary education and its interaction with period of birth on the probability of receiving disability pension (DP) In Table 3, we extended the random intercept logit model for the relationship between the probability of receiving DP and non-completion of secondary education to allow the impact of non-completion to vary across municipalities. The two-level random intercept model, which is nested in the random slope model, is rejected at the 5% significance level (using a likelihood ratio test), suggesting that the impact of school non-completion does vary between municipalities.
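The comparison of the nested models can be sketched as a likelihood ratio test. In the sketch below the log-likelihood values are hypothetical, and the random slope model is assumed to add two parameters (the slope variance and the intercept-slope covariance), in which case the chi-square survival function with two degrees of freedom reduces to exp(-LR/2):

```python
import math

def lr_test_df2(ll_intercept: float, ll_slope: float):
    """Likelihood ratio test of a random-intercept model (null) against
    a random-slope model adding two parameters. With df = 2 the
    chi-square p-value simplifies to exp(-LR / 2)."""
    lr = 2.0 * (ll_slope - ll_intercept)
    p_value = math.exp(-lr / 2.0)
    return lr, p_value

# Hypothetical log-likelihoods of the two fitted models
lr, p = lr_test_df2(ll_intercept=-14250.0, ll_slope=-14245.2)
print(round(lr, 1), round(p, 4))  # 9.6 0.0082 -> reject the random-intercept model at the 5% level
```

Because the slope variance is constrained to be non-negative, this naive chi-square p-value is conservative; a mixture of chi-square distributions gives a slightly smaller p-value.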
Table 3 Parameter estimates and log-likelihood values for the random intercept and random slope logistic regression models Turning to the neighbourhood and municipality variables (Table 4), we found that rural settlement is associated with lower risk of DP in young adulthood. This corresponds well with the patterns observed in Fig. 3, where clusters of some of the country's highest DP rates are found in the densely populated areas of eastern and southern Norway. The spatial lag variable is positive and significant, indicating that early DP clusters regionally. This confirms the clustering mapped in Fig. 3. Living in municipalities where neighbouring municipalities have high DP rates correlates with higher DP risk, even when adjusting for individual and municipal socioeconomic variables. Models 3–5 (in Table 4) suggest that there is co-variation between the municipal socioeconomic environment and the individual DP risk. However, these associations are conditioned by the completion of secondary education. In other words, the effect of the municipal socioeconomic environment changes depending on whether or not the individual has completed secondary education. Among non-completers, advantageous municipal socioeconomic conditions, such as high income and education levels and low unemployment and social security payment rates, are all associated with higher DP risk. Table 4 The impact of non-completion of secondary education, municipal socioeconomic factors and their interactionsa on the probability of receiving disability pension (DP) Keeping other variables constant, the predicted effect of a municipal socioeconomic variable, such as education level, can be evaluated by summing the municipal education level term (the percentage of inhabitants with secondary or tertiary education), the school non-completion term and their interaction, evaluated at different levels of municipal education.
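This evaluation amounts to plugging different municipal education levels into the fitted logit. A minimal sketch follows; the coefficients are hypothetical, chosen only to mimic magnitudes of the order reported in Fig. 5, not our fitted estimates:

```python
import math

def dp_probability(non_completer: int, pct_educated: float,
                   b0: float = -6.0, b_nc: float = 1.19,
                   b_edu: float = -0.01, b_int: float = 0.0349) -> float:
    """Predicted DP probability from a logit with a school
    non-completion x municipal education level interaction.
    All coefficients are illustrative, not fitted estimates."""
    logit = (b0 + b_nc * non_completer + b_edu * pct_educated
             + b_int * non_completer * pct_educated)
    return 1.0 / (1.0 + math.exp(-logit))

for pct in (60, 75):
    # Non-completers: risk grows with the municipal education level;
    # completers: risk stays roughly flat and below 0.5%
    print(pct, round(dp_probability(1, pct), 3), round(dp_probability(0, pct), 4))
```

With these illustrative values the non-completer risk rises from about 3.5% at a 60% municipal education level to about 5% at 75%, while the completer risk remains below 0.5% throughout.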
Doing this reveals that the probability of receiving a DP among school non-completers increases with increasing municipal education level, whereas among school completers the probability is more or less constant (< 0.5%) regardless of education level (Fig. 5). Among non-completers residing in a municipality where 60% of inhabitants have secondary or tertiary education, the risk is 3.5%, and in a municipality with 75% the risk increases to about 5%. Predictive margins of school completers and non-completers predicting probability (Pr) of receiving DP by percentages of municipal residents with secondary or tertiary education This study examined how medically based DP among young adults varies geographically, and how municipal socioeconomic conditions interact with non-completion of secondary education in determining DP risk. Findings from the current study reinforce the relevance of the residential context in DP risk among young adults. In support of previous studies, we found that non-completers of secondary education are more likely to receive early DP than completers. Our parametric estimation, however, suggests that the association between school non-completion and DP receipt varies across municipalities. The key contributions of this study relate to the exploration of how different municipal socioeconomic conditions interact with non-completion of secondary education to alter DP risk in young adulthood. Municipalities with high socioeconomic profiles are, in general, associated with both a lower risk of non-completion and DP receipt [36,37,38, 60]. But this association does not hold for all groups. We found that non-completion has a stronger association with DP in socioeconomically advantaged municipalities. In other words, living in a high-status municipality (i.e.
high income, high education levels and low unemployment and social security payment rates) is associated with higher risk of DP among those who do not complete their secondary education. Spatial clustering of DPs, which is evident at the municipality level, can be interpreted in light of Wilson's [61] assumption that neighbourhood characteristics influence collective socialisation processes by shaping the type of role models youth are exposed to outside their homes. Interactions with community peers and adults shape the norms, values, aspirations and, ultimately, the behaviours of the residents. Hence, advantaged neighbourhoods, where most adults have attained advanced formal education and steady jobs, will foster behaviours and attitudes within the next generations that are conducive to success in both education and work. In less advantaged communities, where the share of the population participating in the labour force is low and the dependency on welfare benefits is high, positive attitudes toward education and work careers may be less common. Rege et al. [62] suggest that disability may have a "contagion" effect in the community, meaning that the propensity to receive DP increases when many people around you also depend on DP. Community characteristics represent more than the sum of their parts. Socioeconomic factors at the level of individuals may fail to protect even the health of well-off people if they live in socioeconomically disadvantaged neighbourhoods [35], and socioeconomically privileged neighbourhoods may impose an added health risk to the marginalised. Not only did we find that municipal factors were correlated with young inhabitants' DP risk, but we also uncovered regional effects. The DP rates in neighbouring municipalities are significantly associated with DP risk, suggesting inter-municipal processes.
Most municipalities are embedded within larger regional contexts, and common historical, political, economic and cultural factors shape them. Todd [63] suggests that the inherited regional differences in social structure affect our practices and values, which, in turn, will dispose us to think and act institutionally. In western Norway, a region with traditional Christian orthodoxy [64], people strongly value education and express this value by attaining higher formal education. Official statistics show that the population of western Norway, in general, has better health and the highest life expectancy in the country [65]. Thus, our finding of high secondary education completion rates and low DP prevalence in western Norway is not surprising. Like Markussen et al. [66], we found that northern Norway has a higher non-completion rate than any other region. Yet, this region has no systematic clustering of high DP prevalence. Even outside of our focal age group, the education level here is relatively low and many employers have traditionally not required a secondary education. In line with previous Norwegian population studies [10, 11], we found a strong association between school non-completion and DP risk in young adulthood. This association may emerge from risk factors not directly modelled. Poor health in adolescence is, for example, strongly associated with school non-completion [67]. Such health problems may, indeed, lower the chances of finding a job and increase the probability of receiving DP in young adulthood. Our analysis, however, shows that the relationship between school non-completion and DP risk varies between municipalities and that the municipal socioeconomic environment has a substantial impact on this relationship. Previous Nordic population studies have demonstrated a relationship between municipal socioeconomic conditions and DP prevalence [36,37,38].
What stands out in our study is the finding that living in municipalities with high education and income levels and low unemployment and social security benefit rates was associated with higher DP risk among non-completers. Advantageous socioeconomic conditions are generally associated with increased individual probability of both completing secondary education and successfully entering the labour market [32, 38]. Nevertheless, population and community characteristics can, indeed, interact with individual characteristics [68] and have differing impacts across population groups. Non-completers in areas with a lower education level may, according to the relative deprivation hypothesis [34, 44], be less at risk than non-completers in more socioeconomically advanced areas. Mishra and Carleton [69] demonstrated that subjective feelings of relative deprivation are linked with poorer physical and mental health. Moreover, social distance, distrust and lack of cohesion between population groups often characterise communities with high material and social inequalities [70]. This may lead to higher stress levels, especially for those at the bottom of the social ladder, resulting in higher prevalence of stress-related morbidity and health risk behaviours [71, 72]. Moreover, young adults without a secondary school degree may face greater difficulties in the labour market in societies with ample access to highly qualified applicants. Disparities between workers' resources and structural features of the job market may negatively impact their health [73]. School non-completion may, in other words, be more detrimental and contribute to stronger health selection in socioeconomically advanced areas. Hence, municipalities with seemingly strong socioeconomic profiles pose added risks for disadvantaged young adults without a secondary school degree. Similar to Reime and Claussen [37], we found that rural settlement was associated with a lower risk of DP among young school leavers.
The education level in rural parts of Norway is generally lower compared to more urban areas, with easier access to jobs not requiring a formal education. Based on the relatively high proportion of young people receiving DP in Norway, one might question how DPs are granted. Disability benefits granted to young people are a substitute for lost income due to disability. In order to be entitled to benefits as a young disabled person, one must be under 26 years old upon becoming seriously and permanently ill, and said illness must be clearly documented by a medical doctor/specialist. The causes of the increasing proportion of disabled young people in Norway are, however, highly complex and unclear [3]. It is primarily mental illness that causes young people to become disabled. One important explanation is likely that it has become more difficult for young people with mental illnesses to obtain and retain employment [3]. Another explanation of this growth is tied to changes in social security schemes and expectations related to receiving valuable welfare schemes. As time-limited disability benefits were replaced with work-disbursement benefits in 2010, many were transferred to this new benefit. Today, about 70% of those who received temporary disability benefits have been granted DP [74]. A major strength of our study is the use of large, nationally representative registry data with multiple explanatory factors at the individual, family and neighbourhood levels linked to population-based municipal socioeconomic factors. The use of high quality, official longitudinal registry data covering almost the entire Norwegian population greatly minimises the risk of selection bias and random errors in our analyses. However, using such a large dataset introduces the risk of identifying significant, but inconsequential effects [75].
Although studies based on large sample sizes have many advantages, marginally significant effects observed in such studies typically mean that the predictive effect of the exposure is quite modest [75]. Moreover, a large dataset with multiple explanatory variables introduces the risk of over-adjusting for inter-level confounding, in that contextual factors might determine some of the individual level variables [76]. We limited the risk of over-adjustment by including only a small number of individual and family level predictors that previous research has shown to affect DP risk. Our study has several limitations. First, there are many methodological challenges in the analysis of neighbourhood contextual effects, such as identification of the appropriate boundaries [77], endogeneity [78], structural confounding [79] and excessive extrapolation in multilevel modelling [78]. An obvious challenge related to the estimation of neighbourhood effects in Norway is the major differences in population density between the different regions of the country. A 30% random and stratified sample of the population leads to a small number of study participants in a significant proportion of neighbourhoods. Moreover, the issue of selective residential mobility poses interpretational challenges. For instance, individuals with advantageous socioeconomic circumstances and better health tend to move to or remain in less deprived neighbourhoods [80]. Finally, an ideal study should include longitudinal explanatory data at multiple appropriate levels and allow the levels (i.e. the context) to change over time. However, our data do not allow us to control for this and, thus, prevent the adoption of this analytic framework. This study underlines the importance of completing secondary education in the prevention of medically based DP among young adults in modern society.
However, the study also demonstrates the significance of the residential context and local socioeconomic environment in individual variation in DP receipt. Low educational achievement and DP receipt have several central determinants in common, but comparing the geographical distributions of non-completion and DP rates reveals regional divergence. The risk factors manifest themselves at different structural levels, and a risk measured at the individual level may have a different effect when evaluated at the municipal level. This creates divergence in the geographical distribution of non-completion and DP rates, and anything but a multilevel analysis would likely conflate these results. Moreover, this study suggests that the population under study largely defines the relationship between risk factors and early DP. Advantageous municipal socioeconomic conditions will, in general, increase both the individual probability of completing secondary education and of successfully entering the labour market. However, among non-completers, these municipal conditions are associated with a higher risk of receiving DP in young adulthood. Furthermore, living in rural communities lowers the risk of early DP. The mostly rural northern Norway has the highest non-completion rates in the country without particularly high levels of DP. These communities offer relatively well-paid jobs in the mariculture and fishing industries that do not require high formal education. Young adults with no previous employment records have the highest risk of receiving DP. Young people who do not finish secondary education are more marginalised in societies that place a higher weight on formal education. As more students complete their education, the potential marginalisation and barriers into the job market for those who drop out increase.
As our results suggest, environmental factors are important determinants of risk, and measures aimed at lowering DP rates will probably fail to reach their potential without an understanding of the risks posed by the local environment. Future efforts to promote social equality and successful transitions to adulthood with regard to work and health should focus on the interplay between the local community and individual factors. OECD. Sickness, disability and work: breaking the barriers: a synthesis of findings across OECD countries. Paris: OECD Publishing; 2010. https://doi.org/10.1787/9789264088856-en. Ellingsen J. Utviklingen i uføretrygd per 31. mars 2017 [in Norwegian]. Oslo: Norwegian Labour and Welfare Administration (NAV); 2017. https://www.nav.no/no/NAV+og+samfunn/Statistikk/AAP+nedsatt+arbeidsevne+og+uforetrygd+-+statistikk/Uforetrygd/Uforetrygd+-+Statistikknotater Brage S, Thune O. Ung uførhet og psykisk sykdom [in Norwegian]. Oslo: Arbeid og Velferd, Norwegian Labour and Welfare Administration (NAV); 2015. p. 37–49. Dølvik J, Fløtten T, Hippe J, Jordfald B. The Nordic model towards 2030. A new chapter? NordMod2030: final report. Fafo-report; 2015. OECD. Mental health and work. Paris: OECD Publishing; 2013. https://doi.org/10.1787/9789264178984-en Avendano M, Berkman LF. Labor markets, employment policies, and health. In: Berkman L, Kawachi I, Glymour M, editors. Social epidemiology. 2nd ed. New York: Oxford University Press; 2014. p. 182–233. Bartley M, Ferrie J, Montgomery SM. Health and labour market disadvantage: unemployment, non-employment, and job insecurity. In: Social determinants of health. New York: Oxford University Press; 2009. p. 78–96. Claussen B. Restricting the influx of disability beneficiaries by means of law: experiences in Norway. Scand J Public Health. 1998;26(1):1–7. Viner RM, Ozer EM, Denny S, et al. Adolescence and the social determinants of health. Lancet. 2012;379(9826):1641–52. https://doi.org/10.1016/S0140-6736(12)60149-4.
Gravseth HM, Bjerkedal T, Irgens LM, Aalen OO, Selmer R, Kristensen P. Life course determinants for early disability pension: a follow-up of Norwegian men and women born 1967-1976. Eur J Epidemiol. 2007;22(8):533–43. https://doi.org/10.1007/s10654-007-9139-9. De Ridder KA, Pape K, Johnsen R, Westin S, Holmen TL, Bjorngaard JH. School dropout: a major public health challenge: a 10-year prospective study on medical and non-medical social insurance benefits in young adulthood, the young-HUNT 1 study (Norway). J Epidemiol Community Health. 2012;66(11):995–1000. https://doi.org/10.1136/jech-2011-200047. Bäckman O, Jakobsen V, Lorentzen T, Österbacka E, Dahl E. Early school leaving in Scandinavia: extent and labour market effects. J Eur Soc Policy. 2015;25(3):253–69. Bäckman O, Jakobsen V, Lorentzen T, Österbacka E, Dahl E. Dropping out in Scandinavia: social exclusion and labour market attachment among upper secondary school dropouts in Denmark, Finland, Norway and Sweden. Institute for Futures Studies; 2011. OECD. Education at a glance 2014: OECD indicators. Paris: OECD Publishing; 2014. https://doi.org/10.1787/eag-2017-en Bäckman O, Nilsson A. Pathways to social exclusion—a life-course study. Eur Sociol Rev. 2010;27(1):107–23. https://doi.org/10.1093/esr/jcp064. Caspi A, Wright BRE, Moffitt TE, Silva PA. Early failure in the labor market: childhood and adolescent predictors of unemployment in the transition to adulthood. Am Sociol Rev. 1998;63(3):424–51. https://doi.org/10.2307/2657557. Marmot MG, Bell R. Fair society, healthy lives. Public Health. 2012;126(Suppl 1):S4–10. https://doi.org/10.1016/j.puhe.2012.05.014. Marmot MG, Wilkinson RG. Social determinants of health. Oxford: Oxford University Press; 2006. Myhr A, Haugan T, Espnes GA, Lillefjell M. Disability pensions among young adults in vocational rehabilitation. J Occup Rehabil. 2015;26(1):95–102. https://doi.org/10.1007/s10926-015-9590-5. OECD. Society at a Glance 2016: OECD Social Indicators.
Paris: OECD Publishing; 2016. https://doi.org/10.1787/9789264261488-en Amone-P'olak K, Burger H, Huisman M, Oldehinkel AJ, Ormel J. Parental psychopathology and socioeconomic position predict adolescent offspring's mental health independently and do not interact: the TRAILS study. J Epidemiol Community Health. 2011;65(1):57–63. https://doi.org/10.1136/jech.2009.092569. Boe T, Sivertsen B, Heiervang E, Goodman R, Lundervold AJ, Hysing M. Socioeconomic status and child mental health: the role of parental emotional well-being and parenting practices. J Abnorm Child Psychol. 2013;42(5): 705–15. https://doi.org/10.1007/s10802-013-9818-9. Boe T, Overland S, Lundervold AJ, Hysing M. Socioeconomic status and children's mental health: results from the Bergen child study. Soc Psychiatry Psychiatr Epidemiol. 2012;47(10):1557–66. https://doi.org/10.1007/s00127-011-0462-9. Galobardes B, Lynch JW, Smith GD. Is the association between childhood socioeconomic circumstances and cause-specific mortality established? Update of a systematic review. J Epidemiol Community Health. 2008;62(5):387–90. https://doi.org/10.1136/jech.2007.065508. Chen E, Martin AD, Matthews KA. Socioeconomic status and health: do gradients differ within childhood and adolescence? Soc Sci Med. 2006;62(9):2161–70. Repetti RL, Taylor SE, Seeman TE. Risky families: family social environments and the mental and physical health of offspring. Psychol Bull. 2002;128(2):330. Poulton R, Caspi A, Milne BJ, et al. Association between children's experience of socioeconomic disadvantage and adult health: a life-course study. Lancet. 2002;360(9346):1640–5. https://doi.org/10.1016/s0140-6736(02)11602-3. Davis-Kean PE. The influence of parent education and family income on child achievement: the indirect role of parental expectations and the home environment. J Fam Psychol. 2005;19(2):294. Myhr A, Lillefjell M, Espnes GA, Halvorsen T. Do family and neighbourhood matter in secondary school completion? 
A multilevel study of determinants and their interactions in a life-course perspective. PLoS One. 2017;12(2):e0172281. Pape K, Bjorngaard JH, De Ridder KA, Westin S, Holmen TL, Krokstad S. Medical benefits in young Norwegians and their parents, and the contribution of family health and socioeconomic status. The HUNT study, Norway. Scand J Public Health. 2013;41(5):455–62. https://doi.org/10.1177/1403494813481645. Harkonmaki K, Korkeila K, Vahtera J, et al. Childhood adversities as a predictor of disability retirement. J Epidemiol Community Health. 2007;61(6):479–84. https://doi.org/10.1136/jech.2006.052670. Nieuwenhuis J, Hooimeijer P, Meeus W. Neighbourhood effects on educational attainment of adolescents, buffered by personality and educational commitment. Soc Sci Res. 2015;50:100–9. Reijneveld SA. Neighbourhood socioeconomic context and self reported health and smoking: a secondary analysis of data on seven cities. J Epidemiol Community Health. 2002;56(12):935–42. https://doi.org/10.1136/jech.56.12.935. Stafford M, Marmot M. Neighbourhood deprivation and health: does it affect us all equally? Int J Epidemiol. 2003;32(3):357–66. https://doi.org/10.1093/ije/dyg084. Stafford M, McCarthy M. Neighbourhoods, housing, and health. In: Marmot MG, Wilkinson RG, editors. Social determinants of health. 2nd ed. Oxford: Oxford University Press; 2006. p. 297–317. Laaksonen M, Gould R. The effect of municipality characteristics on disability retirement. Eur J Public Health. 2013;24(1):116–21. Reime LJ, Claussen B. Municipal unemployment and municipal typologies as predictors of disability pensioning in Norway: a multilevel analysis. Scand J Public health. 2013;41(2):158–65. Krokstad S, Magnus P, Skrondal A, Westin S. The importance of social characteristics of communities for the medically based disability pension. Eur J Pub Health. 2004;14(4):406–12. https://doi.org/10.1093/eurpub/14.4.406. Kavanagh AM, Goller JL, King T, Jolley D, Crawford D, Turrell G. 
Urban area disadvantage and physical activity: a multilevel study in Melbourne, Australia. J Epidemiol Community Health. 2005;59(11):934–40. https://doi.org/10.1136/jech.2005.035931. Subramanian SV, Kawachi I, Kennedy BP. Does the state you live in make a difference? Multilevel analysis of self-rated health in the US. Soc Sci Med. 2001;53(1):9–19. https://doi.org/10.1016/S0277-9536(00)00309-9. Browning CR, Cagney KA. Neighborhood structural disadvantage, collective efficacy, and self-rated physical health in an urban setting. J Health Soc Behav. 2002;43(4):383–99. Marinacci C, Spadea T, Biggeri A, Demaria M, Caiazzo A, Costa G. The role of individual and contextual socioeconomic circumstances on mortality: analysis of time variations in a city of north West Italy. J Epidemiol Community Health. 2004;58(3):199–207. https://doi.org/10.1136/jech.2003.014928. Wilkinson RG. Ourselves and others - for better or worse: social vulnerability and inequality. In: Marmot MG, Wilkinson RG, editors. Social determinants of health. Oxford: Oxford University Press; 2006. p. 341–57. Wilkinson RG. Ourselves and others–for better or worse: social vulnerability and inequality. In: Marmot MG, Wilkinson RG, eds. Social Determinants of Health. Oxford Oxford University Press. 2006:341–57. Akselsen A, Lien S, Siverstøl Ø. FD-Trygd, list of variables. Oslo/Kongsvinger: Statistics Norway/Department of Social Statistics/Division for Social Welfare Statistics; 2007. Ellingsen J. Utviklingen i uføretrygd per 31. mars 2018 [in Norwegian]: The Norwegian Labour and Welfare Administration; 2018. https://www.nav.no/no/NAV+og+samfunn/Statistikk/AAP+nedsatt+arbeidsevne+og+uforetrygd+-+statistikk/Uforetrygd/ Statistics Norway. Population and area in urban settlements 2016 [In Norwegian]. [Official statistics ]. https://www.ssb.no/befolkning/statistikker/beftett/aar. Goldstein H. Multilevel statistical models. London: Arnold; 1995. Rabe-Hesketh S, Skrondal A., ed. 
Multilevel and longitudinal modeling using Stata, Volume II: Categorical responses, counts, and survival. Third ed. Texas: STATA press; 2012. Snijders TAB, Bosker RJ. Multilevel analysis: an introduction to basic and advanced multilevel modeling, second ed. London: Sage Publishers; 2012. Merlo J, Chaix B, Yang M, Lynch J, Råstam L. A brief conceptual tutorial of multilevel analysis in social epidemiology: linking the statistical concept of clustering to the idea of contextual phenomenon. J Epidemiol Community Health. 2005;59(6):443–9. Larsen K, Merlo J. Appropriate assessment of neighborhood effects on individual health: integrating random and fixed effects in multilevel logistic regression. Am J Epidemiol. 2005;161(1):81–8. Merlo J, Viciana-Fernández FJ, Ramiro-Fariñas D. Population RGotLDotA. Bringing the individual back to small-area variation studies: a multilevel analysis of all-cause mortality in Andalusia, Spain. Soc Sci Med. 2012;75(8):1477–87. Browne WJ, Subramanian SV, Jones K, Goldstein H. Variance partitioning in multilevel logistic models that exhibit overdispersion. J R Stati Soc: A (Stat Soc). 2005;168(3):599–613. Merlo J, Chaix B, Ohlsson H, et al. A brief conceptual tutorial of multilevel analysis in social epidemiology: using measures of clustering in multilevel logistic regression to investigate contextual phenomena. J Epidemiol Community Health. 2006;60(4):290–7. Chaix B, Leyland AH, Sabel CE, et al. Spatial clustering of mental disorders and associated characteristics of the neighbourhood context in Malmö, Sweden, in 2001. J Epidemiol Community Health. 2006;60(5):427–35. Chaix B, Merlo J, Subramanian S, Lynch J, Chauvin P. Comparison of a spatial perspective with the multilevel analytical approach in neighborhood studies: the case of mental and behavioral disorders due to psychoactive substance use in Malmö, Sweden, 2001. Am J Epidemiol. 2005;162(2):171–82. Anselin L. Local indicators of spatial association—LISA. Geogr Anal. 1995;27(2):93–115. 
Anselin L, Getis A. Spatial statistical analysis and geographic information systems. Ann Reg Sci. 1992;26(1):19–33. Ainsworth JW. Why does it take a village? The mediation of neighborhood effects on educational achievement. Soc Forces. 2002;81(1):117–52. https://doi.org/10.1353/sof.2002.0038. Wilson WJ. When work disappears: The world of the new urban poor: vintage, 2011. Rege M, Telle K, Votruba M. Social interaction effects in disability pension participation: evidence from plant downsizing*. Scand J Econ. 2012;114(4):1208–39. Todd E. The causes of progress. Culture, authority and change. Oxford: Blackwell Pub; 1987. Rokkan S. Geography, religion and social class: crosscutting cleavages in Norwegian politics. In: Lipset SM, Rokkan S, editors. Party systems and voter alignments: cross-national perspectives. New York: Free press; 1967. Norwegian Institute of Public Health. Oslo: Public health report: Life expectancy in Norway; 2016. https://www.fhi.no/en/op/hin/befolkning-og-levealder/levealderen-i-norge/ Markussen E, Lødding B, Holen S. De' hær e'kke nokka for mæ [in Norwegian]. Oslo: The Nordic Institute for Studies in Innovation, Research and Education (NIFU); 2012. De Ridder KA, Pape K, Johnsen R, Holmen TL, Westin S, Bjorngaard JH. Adolescent health and high school dropout: a prospective cohort study of 9000 Norwegian adolescents (the young-HUNT). PLoS One. 2013;8(9):e74954. https://doi.org/10.1371/journal.pone.0074954. Shouls S, Congdon P, Curtis S. Modelling inequality in reported long term illness in the UK: combining individual and area characteristics. J Epidemiol Community Health. 1996;50(3):366–76. Mishra S, Carleton RN. Subjective relative deprivation is associated with poorer physical and mental health. Soc Sci Med. 2015;147:144–9. Kawachi I, Kennedy BP, Lochner K, Prothrow-Stith D. Social capital, income inequality, and mortality. Am J Public Health. 1997;87(9):1491–8. Wilkinson RG. Health, hierarchy, and social anxiety. Ann N Y Acad Sci. 
AM is funded by the Research Council of Norway and Frisknett AS (Industrial PhD Scheme, Grant number: 208173/O30). Due to the legislation governing scientific ethics, the data that support the findings of this study are only available on request in accordance with the agreement with the owner of the data, Statistics Norway, and the approver of the study, the Regional Committees for Medical and Health Research Ethics (REC) in Mid-Norway.
Please see http://www.ssb.no/en/omssb/tjenester-og-verktoy/data-til-forskning for the procedure and requirements to obtain microdata from Statistics Norway. Department of Neuromedicine and Movement Science, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway: Arnhild Myhr & Monica Lillefjell. Faculty of Nursing and Health Sciences, Nord University, Levanger, Norway: Tommy Haugan. Department of Health Research, SINTEF Technology and Society, Trondheim, Norway. AM, TH and TH designed and planned the study. AM and TH (i.e. T. Halvorsen) structured and analysed the data, and AM, ML, TH and TH interpreted the data and wrote the manuscript. All authors take responsibility for the integrity and accuracy of the data analysis and the decision to submit this paper for publication. All authors read and approved the final manuscript. Correspondence to Arnhild Myhr. The present study is based on retrospective analysis of registry data. The Regional Committees for Medical and Health Research Ethics (REK) of Mid-Norway approved the study and the data linkage procedures (permission 2011/783). The ethics committee REK formally waived the need for consent, because the study used data registries where the information was collected from sources other than the persons themselves.
Theoretical Analysis of Saline Diffusion during Sodium Chloride Aqueous Solutions Freezing for Desalination Purposes Beatriz Castillo Téllez, Karim Allaf, Isaac Pilatowsky Figueroa, Rosenberg Javier Romero Domínguez, Roberto Best y Brown, Wilfrido Rivera Gomez Franco Subject: Earth Sciences, Environmental Sciences Keywords: freezing/melting desalination process; aqueous solutions of sodium chloride; theoretical diffusive models Considering the high demand for fresh water and its scarce availability, water desalination is an attractive technology, producing about 44 Mm³/year worldwide, but the most common desalination techniques are highly energy demanding. Freezing-melting (F/M) desalination uses up to 70% less thermal energy, yet it is the least used process, mainly because of the difficulty of separating the salt. This study proposes a model of the thermodynamic potential that drives salt diffusion during the F/M process, using an aqueous solution of sodium chloride. The model supports a sensitivity analysis of the process aimed at promoting the separation between the high-concentration brine and the ice, with the liquid removed by a physical process. The one-dimensional model follows the evolution of both thermal and mass diffusion, which depend on the temperature and saline gradients, and predicts whether the salt will remain inside the ice. The thermal potential is adjusted so that freezing occurs only once the salt has been "pushed" towards the brine. Most models base their results on an assumed saline concentration of the liquid fraction, a value on which there is considerable disagreement. In this paper the calculations are based instead on the concentration at the solid-liquid interface, which has been extensively studied with consistent results; this is the main advantage of the proposed model.
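The coupled transport described in this abstract can be illustrated with a minimal one-dimensional explicit finite-difference sketch. This is not the authors' model: the grid spacing, time step, and diffusivities below are illustrative order-of-magnitude assumptions. Because heat diffuses several orders of magnitude faster than dissolved NaCl, the temperature front outruns the solute, which is the gradient mismatch such models exploit:

```python
import numpy as np

def diffuse_1d(u, D, dx, dt, steps, left=None, right=None):
    """Explicit FTCS update for a 1-D diffusion field u (temperature or salinity)."""
    assert D * dt / dx**2 <= 0.5, "explicit scheme unstable"
    u = u.copy()
    for _ in range(steps):
        u[1:-1] += D * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
        u[0] = left if left is not None else u[1]      # fixed value or insulated wall
        u[-1] = right if right is not None else u[-2]
    return u

# Hypothetical setup: a column of warm saline water cooled from the left.
n, dx, dt = 50, 1e-3, 1e-2            # illustrative grid (m) and time step (s)
T = np.full(n, 25.0)                  # initial temperature, deg C
C = np.full(n, 35.0)                  # initial NaCl concentration, g/L (~seawater)
alpha, D_salt = 1.4e-7, 1.5e-9        # m^2/s, order-of-magnitude diffusivities

T_new = diffuse_1d(T, alpha, dx, dt, 200, left=-10.0)  # cold boundary at -10 deg C
C_new = diffuse_1d(C, D_salt, dx, dt, 200)             # no-flux walls: salt conserved
```

With uniform initial salinity and no-flux walls, the salt field is unchanged while the cold front has already penetrated the column, which is why a freezing-front model must resolve the interface concentration rather than a bulk liquid value.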
Experimental Investigation of Freezing and Melting Characteristics of Graphene-Based Phase Change Nanocomposite for Cold Thermal Energy Storage Applications Shaji Sidney, Mohan Lal D, Selvam C, Sivasankaran Harish Subject: Engineering, Energy & Fuel Technology Keywords: nanocomposite; melting; freezing; graphene; thermal conductivity In the present work, the freezing and melting characteristics of water seeded with chemically functionalized graphene nanoplatelets in a vertical cylindrical capsule were studied experimentally. The volume percentage of functionalized graphene nanoplatelets was varied from 0.1% to 0.5% in intervals of 0.1%. The stability of the synthesised samples was assessed from the zeta potential distribution. The thermal conductivity of the nanocomposite samples was measured using the transient hot-wire method. A maximum enhancement of ~24% in thermal conductivity was observed at 0.5 vol% in the liquid state, and ~53% in the solid state. The freezing and melting behaviour of water dispersed with graphene nanoplatelets was characterized in a cylindrical stainless-steel capsule immersed in a constant-temperature bath. Bath temperatures of −6 °C and −10 °C were used to study the freezing characteristics, and 31 °C and 36 °C to study the melting characteristics. The freezing and melting times decreased under all test conditions as the volume percentage of GnP increased. At 0.5 vol% graphene loading, the freezing rate was enhanced by ~43% and ~32% for bath temperatures of −6 °C and −10 °C, respectively, and the melting rate by ~42% and ~63% for bath temperatures of 31 °C and 36 °C, respectively.
Image Generation from STL Models and its Potential Role in Layer-Quality Evaluation for the Electron Beam Melting Process Hay Wong, Peter Fox, Derek Neary, Eric Jones Subject: Engineering, Industrial & Manufacturing Engineering Keywords: Additive Manufacturing; Electron Beam Melting; Electronic Imaging; Image Generation; STL Model Electron Beam Melting (EBM) is an increasingly used Additive Manufacturing (AM) technique employed by many industrial sectors, including the medical device and aerospace industries. In EBM process monitoring, data analysis for processed-layer quality evaluation currently focuses on extracting information from the raw data collected in-process, i.e. thermal/optical/electronic images, and on comparing the collected data with the Computed Tomography (CT)/microscopy images generated post-process. This article postulates that a stack of bitmaps could be generated from the 3D model at a range of Z heights during file preparation for the EBM process and serve as a reference image set. In-process comparison between these reference images and the workpiece images collected during the build could then be used for quality assessment. In addition, despite the extensive literature on 3D model slicing and contour generation for AM process preparation, no method for generating images from cross sections of 3D models has been disseminated in detail. This article addresses that gap by presenting a piece of 3D model-image generation software. The software is capable of generating binary 3D model reference images with a user-defined Region-of-Interest (ROI) of the processing area and user-defined Z heights of the model. It is envisaged that this reference image generation capability opens up new opportunities in quality assessment for in-process monitoring of the EBM process.
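The slicing step the article describes (intersecting a triangulated STL model with a plane at a given Z height, then chaining the resulting segments into contours that are filled into a binary bitmap) can be sketched as follows. This is an illustrative implementation, not the article's software; degenerate cases such as a vertex lying exactly on the plane are ignored:

```python
def slice_triangle(tri, z0):
    """Intersect one STL facet (three (x, y, z) vertices) with the plane z = z0.

    Returns a 2-D segment ((x1, y1), (x2, y2)) if exactly two edges cross
    the plane, otherwise None."""
    pts = []
    for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
        za, zb = a[2], b[2]
        if (za - z0) * (zb - z0) < 0:        # edge strictly crosses the plane
            t = (z0 - za) / (zb - za)        # interpolation factor along the edge
            pts.append((a[0] + t * (b[0] - a[0]),
                        a[1] + t * (b[1] - a[1])))
    return (pts[0], pts[1]) if len(pts) == 2 else None

# one facet of a hypothetical model, sliced at Z = 1.0
tri = ((0.0, 0.0, 0.0), (1.0, 0.0, 2.0), (0.0, 1.0, 2.0))
segment = slice_triangle(tri, 1.0)   # ((0.5, 0.0), (0.0, 0.5))
```

Segments collected from all facets at the same Z chain into closed contours; filling those contours at the imaging system's pixel pitch yields the binary reference bitmap for that layer.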
Pilot Feedback Electronic Imaging at Elevated Temperatures and its Potential for In-Process Electron Beam Melting Monitoring Hay Wong, Derek Neary, Eric Jones, Peter Fox, Chris Sutcliffe Subject: Engineering, Industrial & Manufacturing Engineering Keywords: Additive Manufacturing; Electron Beam Melting; In-Process Monitoring; Quality Control; Electronic Imaging Electron Beam Melting (EBM) is an increasingly used Additive Manufacturing (AM) technique employed by many industrial sectors, including the medical device and aerospace industries. The application of this technology is, however, challenged by the lack of a process monitoring and control system to underpin process repeatability and part-quality reproducibility. An electronic imaging system prototype has been developed to serve as an EBM monitoring technique, and its capabilities have been verified at room temperature and at 320±10 °C. Nevertheless, to fully assess the applicability of this technique, image quality needs to be investigated across a range of elevated temperatures to understand the influence of thermal noise due to heat. In this paper, electronic imaging pilot trials at elevated temperatures, ranging upwards from room temperature, were carried out. The image quality measure Q of the digital electron images was evaluated, and the influence of temperature investigated. In this study, raw electronic images generated at higher temperatures had greater Q values, i.e. better global image quality. It was demonstrated that, over the temperature range studied, the influence of temperature on electronic image quality did not adversely affect the visual clarity of image features. It is envisaged that the prototype has significant potential to contribute to in-process EBM monitoring in many manufacturing sectors.
Pilot Attempt to Benchmark Spatial Resolution of an Electronic Imaging System Prototype for In-Process Electron Beam Melting Monitoring Subject: Engineering, Industrial & Manufacturing Engineering Keywords: electron beam melting; in-process monitoring; quality control; electronic imaging; spatial resolution Electron Beam Melting (EBM) is an increasingly used Additive Manufacturing (AM) technique employed by many industrial sectors, including the medical device and aerospace industries. In-process EBM monitoring for quality assurance purposes has been a popular research area. Electronic imaging has recently been investigated as one of the in-process EBM data collection methods, alongside thermal and optical imaging techniques. Although certain capabilities of an electronic imaging system have been investigated, experiments are yet to be carried out to benchmark one of the most important features of any imaging system: spatial resolution. This article addresses this knowledge gap by: (1) proposing an indicator for the estimation of spatial resolution that includes the Backscattered Electron (BSE) information depth, (2) estimating the achievable spatial resolution when electronic imaging is carried out inside an Arcam A1 EBM machine, and (3) presenting an experimental method for edge-resolution evaluation with the EBM machine. Analyses of the experimental results indicated a spatial resolution of the order of 0.3-0.4 mm when electronic imaging was carried out at room temperature. By disseminating an analysis and experimental method to estimate and quantify spatial resolution, this study contributes to ongoing quality assessment research in the field of in-process EBM monitoring.
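Edge-resolution evaluation of the kind mentioned here is commonly done by extracting an edge-spread profile across a sharp feature and measuring its 10-90% rise distance. The sketch below illustrates that generic procedure only; it is not the article's indicator, and the synthetic logistic edge and pixel pitch are assumptions:

```python
import numpy as np

def edge_resolution(profile, pixel_mm):
    """Estimate spatial resolution from a monotonically rising edge-spread
    profile as the 10%-90% rise distance, in mm."""
    p = (profile - profile.min()) / (profile.max() - profile.min())
    x = np.arange(len(p)) * pixel_mm
    x10 = np.interp(0.1, p, x)    # np.interp requires p to be increasing
    x90 = np.interp(0.9, p, x)
    return x90 - x10

# synthetic blurred edge: logistic profile with scale s = 8 px at 0.01 mm/px,
# whose analytic 10-90 width is s * ln(81), roughly 0.35 mm
i = np.arange(300)
profile = 1.0 / (1.0 + np.exp(-(i - 150) / 8.0))
width = edge_resolution(profile, pixel_mm=0.01)
```

The 10-90% rise distance is a convenient single number, but like any edge metric it depends on the orientation of the edge and on noise, so averaging several profiles is usual practice.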
Effect of Process Parameter Modification and Hot Isostatic Pressing on the Mechanical Properties of Selectively Laser-Melted 316L Steel Janusz Kluczynski, Lucjan Śnieżek, Krzysztof Grzelak, Artur Oziębło, Krzysztof Perkowski, Janusz Torzewski, Ireneusz Szachogłuchowicz, Krzysztof Gocman, Marcin Wachowski, Bogusz Kania Subject: Engineering, Mechanical Engineering Keywords: 316L austenitic steel; selective laser melting; hot isostatic pressing; microscopic investigation; residual stresses Industries that rely on additive manufacturing of metallic elements, especially biomedical companies, require materials-science-based knowledge of how process parameters and methods affect element properties, but such phenomena are incompletely understood. In this study, we investigated the influence of selective laser melting (SLM) process parameters and additional heat treatment on mechanical properties. The research included structural analysis of residual stress, microstructure, and scleronomic hardness in low-depth measurements. Tensile tests with element deformation analysis using digital image correlation (DIC) were performed as well. Experimental results made it possible to observe the porosity growth mechanism and its influence on material strength. Elements manufactured with 20% lower energy density had almost half the elongation, which was directly connected with porosity growth during energy density reduction. Hot isostatic pressing (HIP) treatment allowed for a significant reduction of porosity and helped achieve properties similar to elements manufactured using different levels of energy density. Effect of SLM Process Parameters on the Quality of Al Alloy Parts; Part II: Microstructure and Mechanical Properties Ahmed H. Maamoun, Yi F. Xue, Mohamed A. Elbestawi, Stephen C.
Veldhuis Subject: Materials Science, General Materials Science Keywords: additive manufacturing; selective laser melting; AlSi10Mg; Al6061; SLM process parameters; quality of the AM parts Additive manufacturing (AM) provides customization of the microstructure and mechanical properties of components. Selective laser melting (SLM) is the commonly used technique for processing high strength Aluminum alloys. Selection of SLM process parameters could control the microstructure of fabricated parts and their mechanical properties. However, process parameter limits and defects inside the as-built parts present obstacles to customized part production. This study is the second part of a comprehensive work that investigates the influence of SLM process parameters on the quality of as-built Al6061 and AlSi10Mg parts. The microstructure of both materials was characterized for different parts processed over a wide range of SLM process parameters. The optimized SLM parameters were investigated to eliminate the internal microstructure defects. Mechanical properties of the parts were illustrated by regression models generated with design of experiment (DOE) analysis. The results reported in this study were compared to previous studies, illustrating how the process parameters and powder characteristics could affect the quality of produced parts. On the Effect of Selective Laser Melting Process Parameters on the Microstructure and Mechanical Properties of Al Alloys Ahmed Maamoun, Yi Xue, Mohamed Elbestawi, Stephen Veldhuis Subject: Materials Science, General Materials Science Keywords: Additive Manufacturing; Selective Laser Melting; AlSi10Mg; Al6061; SLM process parameters; quality of the as-built parts Additive manufacturing (AM) offers customization of microstructure and mechanical properties of fabricated components according to the material selected, and process parameters applied. 
Selective laser melting (SLM) is the technique commonly used for processing high-strength aluminum alloys. Selection of SLM process parameters can control the microstructure of parts and their mechanical properties. However, process parameter limits and defects inside the as-built parts present obstacles to customized part production. This study investigates the influence of SLM process parameters on the quality of as-built Al6061 and AlSi10Mg parts in terms of the mutual connection between microstructure characteristics and mechanical properties. The microstructure of both materials was characterized for parts processed over a wide range of SLM process parameters, and optimized SLM parameters were investigated to eliminate the internal microstructure defects. The behaviour of the mechanical properties is presented through regression models generated from design of experiment (DOE) analysis of the hardness, ultimate tensile strength, and yield strength results. A comparison between the results obtained and those reported in the literature illustrates the influence of process parameters, build environment, and powder characteristics on the quality of the parts produced. The results of this study could help customize part quality to satisfy design requirements while reducing as-built defects, which in turn reduces the amount of post-processing needed.
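Regression models of the kind these abstracts describe are typically second-order response surfaces fitted to a small factorial design. A minimal sketch of that idea follows; the two factors, the design points, and the density values are invented for illustration and are not the study's data:

```python
import numpy as np

# hypothetical 2x2 factorial with a replicated centre point:
# laser power P (W) and scan speed v (mm/s) vs relative density (%)
P = np.array([200.0, 200.0, 300.0, 300.0, 250.0, 250.0, 250.0])
v = np.array([800.0, 1200.0, 800.0, 1200.0, 1000.0, 1000.0, 1000.0])
rho = np.array([98.1, 96.5, 99.2, 98.4, 99.0, 98.9, 99.1])  # invented responses

# second-order response surface:
# rho ~ b0 + b1*P + b2*v + b3*P*v + b4*P^2 + b5*v^2
X = np.column_stack([np.ones_like(P), P, v, P * v, P**2, v**2])
beta, *_ = np.linalg.lstsq(X, rho, rcond=None)

def predict(p, s):
    """Predicted density at power p and speed s from the fitted surface."""
    return beta @ np.array([1.0, p, s, p * s, p**2, s**2])
```

Contouring `predict` over the (P, v) plane gives exactly the kind of process map the abstracts refer to; a real DOE would also check residuals and significance before trusting the surface.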
Parametric Models and Process Map for Alternating Current Square-Waveform Welding of Heat-Resistant Steel Uttam Kumar Mohany, Yohei Abe, Takahiro Fujimoto, Mitsuyoshi Nakatani, Akikazu Kitagawa, Manabu Tanaka, Tetsuo Suga, Abhay Sharma Subject: Engineering, Industrial & Manufacturing Engineering Keywords: submerged arc; heat resistant steel; square waveform welding; aggregate quality index; bay area; melting efficiency; process; model; process map The demand for efficient processes through a comprehensive understanding and optimization of welding conditions continues to grow in the manufacturing industry. This study involves V-groove welding of heat-resistant 2.25Cr-1Mo steel using square-waveform alternating current. Experiments were conducted to relate input variables (current, frequency, electrode negativity ratio, and welding speed) to process performance measures such as penetration, bay area, deposition rate, melting efficiency, percentage dilution, flux-wire ratio, and heat input. The process was analyzed with respect to defect-free, high-deposition groove welds, sensitivity to process parameters, and the optimization and development of a process map. The study proposes an innovative approach to reducing the cost and time of optimizing the one-pass-each-layer V-groove welding process using bead-on-plate welds. Square-waveform welding creates a metallurgical notch in the form of a bay at the fusion boundary, which can be minimized by selecting appropriate welding conditions. Square-waveform submerged arc welding is more sensitive to changes in current and welding speed than to frequency and electrode negativity ratio; the latter are minor but helpful parameters for achieving optimal results. The agreement between planned and experimental results to within 3% confirms the validity of the proposed approach.
The investigation shows that 90% of the maximum deposition rate is possible for one-pass-each-layer V-groove welds within heat input and weld width constraints. Effect of SLM Process Parameters on the Quality of Al Alloy Parts; Part I: Powder Characterization, Density, Surface Roughness, and Dimensional Accuracy Subject: Materials Science, General Materials Science Keywords: additive manufacturing; selective laser melting; AlSi10Mg; Al6061; SLM process parameters; powder characterization; density, surface topology; dimensional accuracy Additive manufacturing (AM) of high strength Al alloys promises to enhance the performance of critical components related to various aerospace and automotive applications. The key advantage of AM is its ability to generate lightweight, robust, and complex shapes. However, the characteristics of the as-built parts may represent an obstacle to satisfy the part quality requirements. The current study investigates the influence of selective laser melting (SLM) process parameters on the quality of parts fabricated from different Al alloys. A design of experiment (DOE) is used to analyze relative density, porosity, surface roughness, and dimensional accuracy according to the interaction effect between the SLM process parameters. The results show a range of energy densities and SLM process parameters for the AlSi10Mg and Al6061 alloys needed to achieve "optimum" values for each performance characteristic. A process map is developed for each material by combining the optimized range of SLM process parameters for each characteristic to ensure good quality of the as-built parts. The second part of this study investigates the effect of SLM process parameters on the microstructure and mechanical properties of the same Al alloys. This comprehensive study is also aimed at reducing the amount of post-processing needed. 
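The "energy densities" these SLM abstracts refer to are usually computed with the standard volumetric energy density heuristic E = P / (v * h * t), laser power over the product of scan speed, hatch spacing, and layer thickness. The parameter values below are hypothetical and not taken from the study:

```python
def energy_density(power_w, speed_mm_s, hatch_mm, layer_mm):
    """Volumetric energy density E = P / (v * h * t) in J/mm^3, the common
    scalar used to compare SLM parameter sets."""
    return power_w / (speed_mm_s * hatch_mm * layer_mm)

# hypothetical AlSi10Mg-like parameter set: 370 W laser power, 1300 mm/s
# scan speed, 0.19 mm hatch spacing, 0.03 mm layer thickness
e = energy_density(370.0, 1300.0, 0.19, 0.03)   # roughly 50 J/mm^3
```

A single scalar cannot capture interaction effects between the parameters (which is why such studies use DOE analysis), but it is a convenient first-pass axis for a process map.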
Dissecting the Roles of Cuticular Wax in Plant Resistance to Shoot Dehydration and Low-Temperature Stress in Arabidopsis Tawhidur Rahman, Mingxuan Shao, Shankar Pahari, Prakash Venglat, Raju Soolanayakanahally, Xiao Qiu, Abidur Rahman, Karen Tanino Subject: Life Sciences, Biochemistry Keywords: cuticular wax; dehydration; low temperature; freezing; stress avoidance; alkane Cuticular waxes are a mixture of hydrophobic very-long-chain fatty acids and their derivatives accumulated in the plant cuticle. Most studies define the role of cuticular wax largely in terms of reducing non-stomatal water loss. The present study investigated the role of cuticular wax in reducing both low-temperature and dehydration stress in plants, using Arabidopsis thaliana mutants and transgenic genotypes altered in the formation of cuticular wax. cer3-6, a known Arabidopsis wax-deficient mutant (with distinct reductions in aldehydes, n-alkanes, secondary n-alcohols, and ketones compared to wild type (WT)), was most sensitive to water loss, while dewax, a known wax overproducer (greater alkanes and ketones compared to WT), was more resistant to dehydration than WT. Furthermore, cold-acclimated cer3-6 froze at warmer temperatures, while cold-acclimated dewax displayed freezing exotherms at colder temperatures compared to WT. GC-MS analysis identified a characteristic decrease in the accumulation of certain waxes (e.g. alkanes, alcohols) in Arabidopsis cuticles under cold acclimation, which was further reduced in cer3-6. Conversely, the dewax mutant showed a greater ability to accumulate waxes under cold acclimation. FTIR spectroscopy also supported the observations on cuticular wax deposition under cold acclimation. Our data indicate that cuticular alkane waxes, along with alcohols and fatty acids, can facilitate avoidance of both ice formation and leaf water loss under dehydration stress, and are promising genetic targets of interest.
Macro Photography as an Alternative to the Stereoscopic Microscope in the Standard Test Method for Microscopical Characterisation of the Air-Void System in Hardened Concrete: Equipment and Methodology Fernando Suárez, José J. Conchillo, Jaime C. Gálvez, María J. Casati Subject: Engineering, Civil Engineering Keywords: Freezing and thawing; Concrete; Durability; Spacing factor; Pore characterisation The determination of the parameters that characterise the air-void system in hardened concrete elements is crucial for structures subjected to freezing and thawing cycles. The ASTM C457 standard describes procedures to accomplish this task, but they are not easy to apply, require specialised equipment such as a stereoscopic microscope, and are highly tedious to perform. This paper describes an alternative to the modified point-count method described in the Standard that makes use of macro photography. This alternative procedure is successfully applied to a large set of samples and presents some advantages over the traditional method: the required equipment is less expensive, and the procedure is more comfortable and less tedious for the operator. Gait Parameters Measured From Wearable Sensors Reliably Detect Freezing of Gait in a Stepping in Place Task Cameron Diep, Johanna O'Day, Yasmine Kehnemouyi, Gary Burnett, Helen Bronte-Stewart Subject: Medicine & Pharmacology, Allergology Keywords: Parkinson's disease; wearables; inertial measurement unit; sensors; freezing of gait Freezing of gait (FOG), a debilitating symptom of Parkinson's disease (PD), can be safely studied using the stepping in place (SIP) task. However, clinical, visual identification of FOG during SIP is subjective and time-consuming, and automatic FOG detection during SIP currently requires measuring center of pressure on dual force plates.
This study examines whether FOG elicited during SIP in 10 individuals with PD could be reliably detected using kinematic data measured from wearable inertial measurement unit (IMU) sensors. A general logistic regression model (AUC = 0.81) identified three gait parameters that together were the most robust predictors of FOG during SIP: arrhythmicity, swing time coefficient of variation, and swing angular range. Participant-specific models revealed varying sets of gait parameters that best predicted FOG for each participant, highlighting variable FOG behaviors, and demonstrated equal or better performance for 6 out of the 10 participants, suggesting the opportunity for model personalization. The results of this study demonstrate that gait parameters measured from wearable IMUs reliably detect FOG during SIP, and the general and participant-specific gait parameters point to variable FOG behaviors that could inform more personalized approaches for the treatment of FOG and gait impairment in PD. Thermal Regime of a Temperate Deep Lake and Its Response to Climate Change: Lake Kuttara, Japan Kazuhisa A. Chikita, Hideo Oyagi, Tadao Aiyama, Misao Okada, Hideyuki Sakamoto, Toshihisa Itaya Subject: Earth Sciences, Environmental Sciences Keywords: non-freezing; temperate lake; heat budget; heat storage; global warming A temperate deep lake, Lake Kuttara, Hokkaido, Japan (maximum depth 148 m), froze completely every winter in the 20th century. However, the lake has remained unfrozen over winter four times in the 21st century, probably due to global warming. In order to understand how the thermal regime of the lake responds to climate change, its heat storage change was calculated by estimating the heat budget of the lake and monitoring the water temperature at the deepest point for September 2012–June 2016.
As a result, the temporal change of heat storage from the heat budget was highly consistent with that from the direct temperature measurement (coefficient of determination R2 = 0.827). The 1978–2017 data at a meteorological station near Kuttara indicated significant (at less than the 5% level) long-term trends for air temperature (0.024 °C/yr) and wind speed (−0.010 m/s/yr). A sensitivity analysis for the heat storage from the heat budget estimate and an estimate of return periods for mean air temperature in mid-winter allow us to conclude that the lake could remain unfrozen about once every two years in a decade. Ice-Crystal Nucleation in Water: Thermodynamic Driving Force and Surface Tension Olaf Hellmuth, Juern W. P. Schmelzer, Rainer Feistel Subject: Physical Sciences, Applied Physics Keywords: CNT; homogeneous freezing; crystallization of ice; surface tension; thermodynamics of undercooled water A recently developed thermodynamic theory for the determination of the driving force of crystallization and the crystal–melt surface tension is applied to the ice–water system employing the new Thermodynamic Equation of Seawater TEOS-10. The deviations of approximate formulations of the driving force and the surface tension from the exact reference properties are quantified, showing that the proposed simplifications are applicable for low to moderate undercooling and pressure differences relative to the respective equilibrium state of water. The TEOS-10-based predictions of the ice crystallization rate revealed a deceleration of ice nucleation with increasing pressure, and an acceleration of ice nucleation with decreasing pressure. This result is at least in qualitative agreement with laboratory experiments and computer simulations.
Both the temperature and pressure dependencies of the ice–water surface tension were found to be in line with the le Chatelier–Braun principle, in that the surface tension decreases with an increasing degree of metastability of water (i.e., with decreasing temperature and pressure), which favors nucleation, moving the system back toward a stable state. The reason for this behavior is discussed. Finally, the Kauzmann temperature of the ice–water system was found to amount to TK = 116 K, which is far below the temperature of homogeneous freezing. The Kauzmann pressure was found to amount to pK = −212 MPa, suggesting that homogeneous freezing is favored when a negative pressure is exerted on the liquid. In terms of the thermodynamic properties entering the theory, the reason for the negative Kauzmann pressure is the higher mass density of water in comparison to ice at the melting point. Crystal Growth Techniques for Layered Superconductors Masanori Nagao Subject: Physical Sciences, Condensed Matter Physics Keywords: single crystal growth; incongruent melting compound; flux method Layered superconductors are attractive because some of them show high critical temperatures. While their crystal structures are similar, these compounds are composed of many elements. Compounds with many elements tend to be incongruent melting compounds; thus, their single crystals cannot be grown via the melt-solidification process. Hence, such single crystals have to be grown below the decomposition temperature, for which the flux method is a very powerful tool. This review covers single-crystal growth by the flux method, using self-flux, chloride-based flux, and high-pressure, high-temperature (HPHT) flux, for many layered superconductors: high-Tc cuprate, Fe-based, and BiS2-based compounds.
Change Mechanism of the Strength of the Soil–Rock Mixture Freezing–Thawing Interface under Different Rock Contents Li Gang, Huang Tao, Li Zhen, Bai Miaomaio Subject: Earth Sciences, Geology Keywords: soil–rock mixture; freezing–thawing interface; shear strength; shear failure surface; particle calculation model With global warming and the accelerated degradation of permafrost, the engineering problems caused by the formation of weak zones between the shallow and permafrost layers of soil–rock mixture (S-RM) slopes in permafrost regions have become increasingly prominent. To explore the influence of rock content on the shear strength of the S-RM freezing–thawing interface, the variation in shear strength for different rock contents is studied herein using direct shear tests. In addition, a 3D laser scanner is used to obtain the topography of the shear failure surface. Combined with the analysis results of the shear band–particle calculation model, the influence of the rock content on the shear strength of the interface is explored. It was found that the threshold at which rock content affects the interface strength and failure mode is approximately 30%: when the rock content (R) is > 30%, the shear strength increases rapidly with increasing rock content. When R ≤ 30%, the actual shear plane is wave-like; when R > 30%, the shear plane shows gnawing failure. The shear strength of the S-RM freezing–thawing interface mainly comes from the bite force and friction between particles. The main reason for the increase in shear strength with increasing rock content is the increase in bite force between particles, which brings the ratio of bite force to friction force to approximately 1:1.
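Direct shear tests such as those above are typically reduced to a Mohr–Coulomb envelope, τ = c + σ·tan(φ), fitted across several normal stresses to extract cohesion and friction angle. A minimal sketch of that standard reduction, using hypothetical stress values rather than the paper's data:

```python
import numpy as np

# Hypothetical direct-shear results: peak shear stress tau (kPa) measured
# at several normal stresses sigma (kPa). Fitting tau = c + sigma*tan(phi)
# yields cohesion c and friction angle phi. Illustrative values only,
# not taken from the study above.
sigma = np.array([50.0, 100.0, 150.0, 200.0])
tau = np.array([55.0, 85.0, 115.0, 145.0])

slope, c = np.polyfit(sigma, tau, 1)   # slope = tan(phi), intercept = c
phi = np.degrees(np.arctan(slope))
print(round(c, 1), round(phi, 1))      # cohesion (kPa), friction angle (deg)
```

Repeating the fit for each rock content would give the content-dependent strength parameters the abstract describes.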
Simulated and In Situ Frost Heave in Seasonally Frozen Soil from a Cold Temperate Broad-leaved Korean Pine Forest Maosen Lin, Anzhi Wang, Dexin Guan, Changjie Jin, Jiabin Wu, Fenghui Yuan Subject: Earth Sciences, Environmental Sciences Keywords: seasonally frozen soil; frost heave; soil moisture content; soil type; freezing depth; soil porosity Frost heave, which is the volumetric expansion of frozen soil, has great ecological significance, since it creates water storage spaces in soils at the beginning of the growing season in cold temperate forests. To understand the characteristics of frost heave in seasonally frozen soil and the factors that impact its extent, we investigated the frost heave rates of forest soil from different depths and with different soil moisture contents, using both lab-based simulation and in situ measurement in a broadleaved Korean pine forest in the Changbai Mountains (northeastern China). We found that frost heave was mainly affected by soil moisture content, soil type, and gravitational pressure. Frost heave rate increased linearly with soil moisture content, and for each 100% increase in soil moisture content, the frost heave rate increased by 41.6% (loam, upper layer), 17.2% (albic soil, middle layer), and 4.6% (loess, lower layer). Under the same soil moisture content, the frost heave rate of loam was highest, whereas that of loess was lowest, and the frost heave of the uppermost 15 cm, which is the biologically enriched layer, accounted for ~55% of the frost heave. As a result, we determined the empirical relationship between frost heave and freezing depth, which is important for interpreting the effects of frost heave on increases in the storage space of forest soils and for calculating changes in soil porosity. 
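The linear moisture-content sensitivity reported above can be written down directly. This sketch uses the per-layer slopes quoted in the abstract (41.6%, 17.2%, and 4.6% frost heave rate increase per 100% increase in soil moisture content); the function and dictionary names are hypothetical helpers, not from the paper:

```python
# Per-layer slopes from the abstract: increase in frost heave rate (%)
# per 100% increase in soil moisture content.
SLOPES = {
    "loam (upper)": 41.6,
    "albic (middle)": 17.2,
    "loess (lower)": 4.6,
}

def heave_rate_increase(layer, d_moisture_frac):
    """Increase in frost heave rate (%) for a moisture-content increase
    expressed as a fraction (1.0 == +100% moisture content)."""
    return SLOPES[layer] * d_moisture_frac

# A +50% moisture increase in the upper loam layer:
print(heave_rate_increase("loam (upper)", 0.5))  # -> 20.8
```

The large spread in slopes across layers is what makes the uppermost, biologically enriched layer dominate the total heave, as the abstract notes.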
Crack Growth Behavior of Additively Manufactured 316L Steel – Influence of Build Orientation and Heat Treatment Janusz Kluczynski, Lucjan Śnieżek, Krzysztof Grzelak, Janusz Torzewski, Ireneusz Szachogłuchowicz, Marcin Wachowski, Jakub Łuszczek Subject: Engineering, Mechanical Engineering Keywords: additive manufacturing; 316L steel; fatigue cracking; selective laser melting The effects of build orientation and heat treatment on the crack growth behavior of 316L stainless steel (SS) fabricated via a selective laser melting (SLM) additive manufacturing process were investigated. Although research on additively manufactured metallic parts has grown significantly, important gaps remain. The most important issue is the high anisotropy of properties after additive manufacturing, especially from the fatigue point of view. The research compared the crack growth behavior of additively manufactured 316L with that of a conventionally produced reference material. Both groups of samples underwent precipitation heat treatment. Different build orientations in the additively manufactured samples and the rolling direction in the reference samples were taken into account as well. Precipitation heat treatment of the additively manufactured parts made it possible to reach a microstructure and tensile properties similar to those of conventionally made elements. The heat treatment positively affected fatigue properties. Additionally, precipitation heat treatment of the additively manufactured elements significantly reduced the fatigue crack growth rate and changed the fatigue cracking mechanism. Thermophysical Characterization and Numerical Investigation of Three Paraffin Waxes as Latent Heat Storage Materials Manel Kraiem, Mustapha Karkri, Sassi Ben Nasrallah, Patrick Sobolciak, Magali Fois, Nasser A.
Alnuaimi Subject: Engineering, Energy & Fuel Technology Keywords: Phase change material; Paraffin; Melting; Natural convection; Thermal storage Thermophysical characterization of three paraffin waxes (RT27, RT21 and RT35HC) is carried out in this study using DSC, TGA and transient plane source techniques. Then, a numerical study of their melting in a rectangular enclosure is conducted. The enthalpy-porosity approach is used to formulate this problem in order to understand the heat transfer mechanism during the melting process. Analysis of the solid–liquid interface shape and the temperature field shows that conduction is the dominant heat transfer mode at the beginning of the melting process; it is followed by a transition regime, after which natural convection becomes the dominant heat transfer mode. The effects of the Rayleigh number and the aspect ratio of the enclosure on the melting phenomenon are studied, and it is found that the intensity of the natural convection increases as the Rayleigh number becomes higher and the aspect ratio smaller. In the second part of the numerical study, a comparison of the performance of the paraffin waxes during the melting process is conducted. The results reveal that RT21 is kinetically the most performant, but in terms of heat storage capacity RT35HC is the most efficient PCM. Heat Source Modeling in Selective Laser Melting Elham Mirkoohi, Daniel E. Seivers, Hamid Garmestani, Steven Y. Liang Subject: Engineering, Mechanical Engineering Keywords: temperature field; additive manufacturing; selective laser melting; heat source modeling Selective laser melting is an emerging Additive Manufacturing (AM) technology for metals. Intricate three-dimensional parts can be generated from the powder bed by selectively melting the desired locations of the powder. The process is repeated for each layer until the part is built. The necessary heat is provided by a laser.
Temperature magnitude and history during SLM directly determine the molten pool dimensions, thermal stress, residual stress, balling effect, and dimensional accuracy. Laser–matter interaction is a crucial physical phenomenon in the SLM process. In this paper, five different heat source models are introduced to predict the three-dimensional temperature field analytically. These models are known as the steady-state moving point heat source, transient moving point heat source, semi-elliptical moving heat source, double elliptical moving heat source, and uniform moving heat source. The analytical temperature models for all of the heat sources are solved using the three-dimensional differential equation of heat conduction with different approaches. The steady-state and transient moving point heat sources are solved using a separation-of-variables approach, whereas the remaining models are solved by employing Green's functions. Due to the high magnitude of the temperature in the presence of the laser, the temperature gradient is usually high, which has a substantial impact on thermal material properties. Consequently, the temperature field is predicted by considering temperature-dependent thermal material properties. Moreover, due to the repeated heating and cooling, the part usually undergoes several melting and solidification cycles; this physical phenomenon is considered by modifying the heat capacity using the latent heat of melting. Furthermore, the multi-layer aspect of the metal AM process is considered by incorporating the temperature history from the previous layer, since the interaction of the layers has an impact on heat transfer mechanisms. The proposed temperature field models based on the different heat source approaches are validated using experimental measurements of melt pool geometry from independent experiments. A detailed comparison of the models is also provided, and the effect of process parameters on the balling effect is discussed.
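The steady-state moving point heat source named above is classically the Rosenthal solution, T = T0 + Q/(2πkR)·exp(−v(ξ + R)/(2α)) with moving-frame coordinate ξ = x − vt and R the distance to the source. A minimal sketch with illustrative 316L-like constant property values, not the paper's calibrated, temperature-dependent inputs:

```python
import math

# Rosenthal steady-state moving point heat source (one of the five
# analytical models discussed above). Material constants below are
# illustrative 316L-like values; the paper uses temperature-dependent
# properties instead.
def rosenthal_T(x, y, z, t, P=200.0, eta=0.4, v=0.8,
                k=15.0, alpha=4e-6, T0=293.0):
    """Temperature (K) at point (x, y, z) in meters, time t in seconds,
    for a source moving along +x at v m/s; P laser power (W), eta
    absorptivity, k conductivity (W/m K), alpha diffusivity (m^2/s)."""
    xi = x - v * t                        # moving-frame coordinate
    R = math.sqrt(xi**2 + y**2 + z**2)    # distance to the source
    return T0 + (eta * P) / (2 * math.pi * k * R) * \
        math.exp(-v * (xi + R) / (2 * alpha))

# Temperature 0.5 mm behind the source center, on the surface
# (on the trailing axis the exponential factor is exactly 1):
print(round(rosenthal_T(-500e-6, 0.0, 0.0, 0.0), 1))  # roughly 1990 K
```

Note the 1/R singularity at the source itself, which is why such point-source models are typically evaluated at, or calibrated against, the melt pool boundary rather than the beam center.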
Comparison of Nano-Mechanical Behavior between Selective Laser Melted SKD61 and H13 Tool Steels Jaecheol Yun, Van Luong Nguyen, Jungho Choe, Dong-yeol Yang, Hak-sung Lee, Sangsun Yang, Ji-Hun Yu Subject: Materials Science, Metallurgy Keywords: selective laser melting; SKD61 tool steel; nanoindentation; strain-rate sensitivity Using nanoindentation at various strain rates, the mechanical properties of selective laser melted (SLM) SKD61 produced at a scan speed of 800 mm/s were investigated and compared with those of SLM H13. No obvious pile-up (the ratio of the residual depth (hf) to the maximum depth (hmax) was lower than 0.7) and no cracking were observed on any of the indented surfaces. The nanoindentation strain-rate sensitivity (m) of SLM SKD61 was found to be 0.034, with hardness increasing from 8.65 GPa to 9.93 GPa as the strain rate increased between 0.002 s−1 and 0.1 s−1. At the same scan speed, the m value of SLM H13 (m = 0.028) was lower than that of SLM SKD61, indicating that the mechanical behavior of SLM SKD61 was more strongly affected by the strain rate than that of SLM H13. SLM processing of SKD61 therefore shows higher potential for advanced tool design than that of H13. Microstructural Study of CrNiCoFeMn High Entropy Alloy Obtained by Selective Laser Melting Enrico Gianfranco Campari, Angelo Casagrande Subject: Materials Science, Metallurgy Keywords: high entropy alloys; selective laser melting; microstructure; mechanical properties; strengthening mechanism The high entropy alloy (HEA) of equiatomic composition CrNiFeCoMn and with FCC crystal structure was additively manufactured with a selective laser melting (SLM) process starting from mechanically alloyed powders. The as-produced alloy shows fine nitride and σ-phase precipitates, which are Cr-rich and stable up to about 900 K. The precipitates increase in number and size after long-period annealing at 900–1300 K, with a change in the HEA mechanical properties.
Higher furnace aging temperatures, above 1300 K, turn the alloy into a single FCC structure, with disappearance of the nitride and σ-phase precipitates inside the grains and at the grain boundaries, although a finer Cr-rich nitride precipitate phase is still present. These results suggest that the as-produced HEA is, at low and intermediate temperatures, a supersaturated solid solution containing nitride and σ-phase nanostructures. Numerical Simulation and Experimental Study on Selective Laser Melting of 18Ni-300 Maraging Steel Liang Yan, Biao Yan Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: selective laser melting; maraging steel; laser power; temperature field; numerical simulation The energy transfer process of selective laser melting is very complex. To study the effect of selective laser melting on the microstructure and properties of 18Ni-300 maraging steel, ABAQUS was used to simulate the temperature of laser-clad 18Ni-300 maraging steel at different time points and different laser powers. The results show that the cross-sectional shape of the molten pool changes from round to oval with increasing laser power, and the higher the peak of the temperature–time curve, the greater the temperature gradient. Laser cladding experiments on 18Ni-300 maraging steel were then carried out, and the microstructure and mechanical properties of samples produced at different laser powers were analyzed. The results show that with increasing laser power, the grain size of the cladding layer becomes smaller and the microstructure becomes more compact; the hardness of the side surface of the sample is higher than that of the upper surface, and the tensile strength and elongation first increase and then decrease.
The Influence of Heat Treatment Temperature on the Microstructure and Mechanical Properties of a Titanium Alloy Fabricated by Laser Melting Deposition Wei Wang, Xiaowen Xu, Ruixin Ma, Guojian Xu, Weijun Liu Subject: Engineering, Other Keywords: Laser melting deposition; TC4 titanium alloy; Normalizing temperature; Microstructure; Mechanical properties Ti-6Al-4V (TC4) titanium alloy parts were successfully fabricated by laser melting deposition (LMD) technology in this study. Suitable normalizing temperatures are presented in detail for bulk LMD specimens. Optical microscopy, scanning electron microscopy, X-ray diffraction and an electronic universal testing machine were used to characterize the microstructures, phase compositions, tensile properties and hardness of the TC4 alloy parts treated at different normalizing temperatures. The experimental results showed that the microstructures of the as-fabricated LMD specimens mainly consisted of α-Ti phase with a small amount of β-Ti phase. After normalizing treatment, the recrystallized length and width of the α-Ti phase both increased. When normalizing in the (α+β) phase field, the elongated primary α-Ti phase of the as-deposited state was truncated due to the precipitation of β-Ti phase and became a short rod-like primary α-Ti phase. In the as-fabricated microstructure, the β-Ti phase precipitated between the different short rod-shaped α-Ti phases in a basketweave distribution. After normalizing at 990 °C for two hours with subsequent air cooling, the TC4 titanium alloy had a significantly different microstructure from the original LMD sample. Moreover, the mismatch between the tensile and hardness properties was mitigated by this heat treatment. Thus, the normalizing treatment and its temperature qualify as a prospective heat treatment for titanium alloys fabricated by laser melting deposition.
On Some Properties of the Glacial Isostatic Adjustment Fingerprints Giorgio Spada, Daniele Melini Subject: Earth Sciences, Geophysics Keywords: glacial isostatic adjustment; sea level change; fingerprints of past ice melting Along with density and mass variations of the oceans driven by global warming, Glacial Isostatic Adjustment (GIA) in response to the last deglaciation still contributes significantly to present-day sea-level change. Indeed, in order to reveal the impacts of climate change, long-term observations at tide gauges and recent absolute altimetry data need to be decontaminated from the effects of GIA. This is now achieved by means of global models constrained by the observed evolution of the paleo-shorelines since the Last Glacial Maximum, which account for the complex interactions between the solid Earth, the cryosphere and the oceans. In the recent literature, past and present-day effects of GIA are often expressed in terms of fingerprints describing the spatial variations of several geodetic quantities, like crustal deformation, the harmonic components of the Earth's gravity field, and relative and absolute sea level. However, since it is driven by the sluggish readjustment occurring within the viscous mantle, GIA will continue to shape the pattern of sea-level variability during the forthcoming centuries. The shapes of the GIA fingerprints reflect inextricable deformational, gravitational, and rotational interactions occurring within the Earth system. Using up-to-date numerical modeling tools, our purpose is to revisit and explore some of the physical and geometrical features of the fingerprints, their symmetries and intercorrelations, also illustrating how they stem from the fundamental equation that governs GIA, i.e., the Sea Level Equation.
Preparation of Ti-46Al-8Nb Alloy Ingots beyond Laboratory Scale Based on a BaZrO3 Refractory Crucible Baohua Duan, Lu Mao, Yuchen Yang, Qisheng Feng, Xuexian Zhang, Haitao Li, Lina Jiao, Rulin Zhang, Xionggang Lu, Guangyao Chen, Chonghe Li Subject: Materials Science, Metallurgy Keywords: Ti-46Al-8Nb alloy; vacuum induction melting; BaZrO3 refractory; microsegregation; mechanical properties A high Nb-containing TiAl-based alloy ingot beyond laboratory scale, with a composition of Ti-46Al-8Nb (at.%), was prepared by a vacuum induction melting process based on a BaZrO3 refractory. An ingot without macroscopic casting defects (such as pipe shrinkage and centerline porosity) was obtained, and the chemical composition, solidification path, microstructure and tensile properties of the ingot were investigated. The results show that the deviations of the Al and Nb content along a 430 mm long central part of the ingot are approximately ±0.39 at.% and ±0.14 at.%, respectively, and the oxygen content in the ingot can be controlled at around 1000 ppm. During solidification, the alloy underwent a peritectic reaction and formed anisotropic columnar grains. In addition to Al and Nb segregation, β-phase particles associated with γ phase at the triple junctions of the colonies were observed. Moreover, the mechanical properties of the ingot in the transverse direction are significantly better than those in the longitudinal direction, with a tensile strength of up to 700 MPa and a corresponding fracture elongation of 1.1%. The Effect of Zr Addition on the Melting Temperature, Microstructure, Recrystallization and Mechanical Properties of a Cantor High Entropy Alloy Enrico Gianfranco Campari, Angelo Casagrande, E.
Colombini, Magdalena Gualtieri, Paolo Veronesi Subject: Materials Science, Biomaterials Keywords: high entropy alloy; microstructure; vacuum induction melting; heat treatment; mechanical spectroscopy; Zirconium The effect of Zr addition on the melting temperature of the CoCrFeMnNi High Entropy Alloy (HEA), known as the "Cantor alloy", is investigated, together with its microstructure, mechanical properties and thermo-mechanical recrystallization process. The base and Zr-modified alloys are obtained by vacuum induction melting of mechanically pre-alloyed powders followed by recrystallization. The alloys were characterized by X-ray diffraction, scanning and transmission electron microscopy, thermal analyses, mechanical spectroscopy and indentation measurements. The main advantages of Zr addition are: 1) a fast vacuum induction melting (VIM) process; 2) a lower melting temperature, due to the formation of Zr eutectics with all the Cantor alloy elements; 3) good chemical homogeneity of the alloy; 4) improved mechanical properties of the recrystallized grains with a coherent structure. The crystallographic lattice of both alloys was found to be FCC. The results demonstrate that the Zr-modified HEA presents a higher recrystallization temperature and a smaller grain size after recrystallization with respect to the Cantor alloy, with precipitation of a coherent second phase which enhances the alloy's hardness and strength while maintaining good tensile ductility. Microstructure and Properties of SLM High Speed Steel Olsson Elin, Sundin Stefan, Ma Taoran, Proper Sebastian, Lyphout Christophe, André Johanna Subject: Engineering, Automotive Engineering Keywords: AM; selective laser melting; metal powder; high speed steel; microstructure and hardness Selective laser melting (SLM) is a commonly used laser powder bed technique in which the final properties are influenced by many different powder-related properties, such as particle size distribution, chemical composition and flowability.
In applications requiring high hardness, wear resistance, strength and good heat properties, high speed steels (HSS) are widely used today. HSS have a high carbon content and are therefore considered unweldable. The rapidly growing implementation of AM technologies has led to a growing range of new applications and demands for new alloys and properties. The interest in being able to manufacture HSS by SLM without cracking is therefore increasing. In SLM, it is possible to preheat the base plate to a few hundred degrees Celsius; this has been applied to HSS and proved successful owing to the reduced thermal gradients. In this study, the properties of SLM-produced high speed steel PEARL Micro®2012 with a carbon content of 0.61 wt.% have been investigated and compared to those of a forged and rolled PM-HIP counterpart ASP®2012. Influence of Successive Chemical and Thermochemical Treatments on Surface Features of Ti6Al4V Samples Manufactured by SLM Jesús E. González, Gabriela de Armas, Jeidy Negrin, Ana M. Beltrán, Paloma Trueba, Francisco J. Gotor, Eduardo Peón, Yadir Torres Subject: Materials Science, Biomaterials Keywords: selective laser melting; Ti6Al4V; acid etching; chemical oxidation; thermochemical treatment; surface features Ti6Al4V samples obtained by selective laser melting were subjected to acid treatment, chemical oxidation in hydrogen peroxide solution and a subsequent thermochemical treatment. The effects of the temperature and time of acid etching on the surface roughness, morphology, topography and chemical and phase composition of the Ti6Al4V samples after the thermochemical treatment were studied. The surfaces were characterized using scanning electron microscopy, energy dispersive X-ray spectroscopy, X-ray diffraction and contact profilometry. Pore and protrusion sizes were measured. Acid etching modified the elemental composition and surface roughness of the alloy.
Temperature had a greater influence on the morphology, topography and surface roughness of the samples than time. Increases in roughness values were observed after the successive chemical oxidation and thermochemical treatment compared to the values observed on surfaces with acid etching only. After the thermochemical treatment, the samples acid-etched at a temperature of 80 °C showed a multiscale topography. In addition, a network-shaped structure was obtained on all surfaces, both on the protrusions and on the pores previously formed during the acid etching. Effects of Scan Strategy on Thermal Properties and Temperature Field in Selective Laser Melting Elham Mirkoohi, Daniel E. Sievers, Hamid Garmestani, Steven Y. Liang Subject: Engineering, Mechanical Engineering Keywords: selective laser melting; temperature modeling; melt pool geometry; hatching space; time delay The temperature field is an essential attribute of metal additive manufacturing in view of its bearing on the prediction, control, and optimization of residual stress, part distortion, fatigue, balling effect, etc. This work provides an analytical physics-based approach to investigate the effect of scan strategy parameters, including the time delay between two irradiations and the hatching space, on thermal material properties and melt pool geometry. This approach is carried out through the analysis of the distribution of material properties and the temperature profile in three-dimensional space. The moving point heat source approach is used to predict the temperature field. To predict the temperature field during the additive manufacturing process, some important phenomena are considered. 1) Due to the high magnitude of temperature in the presence of the laser, the temperature gradient is usually high, which has a crucial influence on thermal material properties. Consequently, the thermal material properties of stainless steel grade 316L are considered to be temperature-dependent.
2) Due to the repeated heating and cooling, the part usually undergoes several melting and solidification cycles. This physical phenomenon is considered by modifying the heat capacity using the latent heat of melting. 3) The multi-layer aspect of the metal AM process is considered by incorporating the temperature history from the previous layer, since the interaction of the successive layers has an impact on heat transfer mechanisms. 4) The effect of the heat-affected zone on thermal material properties is considered by superposing the material properties in regions where the temperature fields of two consecutive irradiations overlap, since the consecutive irradiations change the behavior of the material properties. The goals are to 1) investigate the effects of temperature-sensitive material properties and constant material properties on the temperature field; 2) study the behavior of thermal material properties under different scan strategies; 3) study the importance of considering the effect of the heat-affected zone on thermal material properties through the prediction of melt pool geometry; and 4) investigate the effect of hatching space on melt pool geometry. This work employs purely physics-based analytical models to predict the behavior of material properties and the temperature field under different process conditions; no finite element modeling is used. Effects of Heat Treatment Temperature on Microstructure and Mechanical Properties of M2 High-Speed Steel Selective Laser Melting Samples Huan Ding, Xiong Xiang, Rutie Liu, Jie Xu Subject: Materials Science, General Materials Science Keywords: high-speed steel (HSS); selective laser melting (SLM); annealing; microstructure; hardness; flexural strength At different heat treatment temperatures, the hardness and flexural strength of M2 high-speed steel selective laser melting (SLM) parts show mixed trends.
When the heat treatment temperature is 260 °C, the hardness and flexural strength of the M2 high-speed steel SLM part are decreased, but the hardness difference between the upper and lower surfaces of the part is also reduced. When the heat treatment temperature is 560 °C, the hardness and flexural strength are close to those of the original M2 high-speed steel SLM part, the performance gradient in the sample is improved, and the overall structure is uniform. When the subsequent heat treatment temperature is 860 °C, the hardness of the SLM parts reaches a minimum, with an average value of 24 HRC. However, the flexural strength exceeds that of the original SLM parts. Moreover, the microstructure of the sample is uniform, which significantly reduces the anisotropy in performance. Collaborative Optimization on Density and Surface Roughness of 316L Stainless Steel in Selective Laser Melting Yong Deng, Zhongfa Mao, Nan Yang, Xiaodong Niu, Xiangdong Lu Subject: Engineering, Industrial & Manufacturing Engineering Keywords: selective laser melting; 316L stainless steel; multi-objective optimization; relative density; surface roughness Online: 16 February 2020 (15:52:05 CET) Although the concept of additive manufacturing has been proposed for several decades, momentum behind selective laser melting (SLM) is finally starting to build. In SLM, density and surface roughness, the important quality indexes of SLMed parts, are dependent on the processing parameters. However, there are few studies in the previous literature on their collaborative optimization in SLM to obtain high relative density and low surface roughness simultaneously. In this work, the response surface method was adopted to study the influences of different processing parameters (laser power, scanning speed and hatch space) on the density and surface roughness of 316L stainless steel parts fabricated by SLM.
A statistical relationship model between processing parameters and manufacturing quality is established. A multi-objective collaborative optimization strategy considering both density and surface roughness is proposed. The experimental results show that the main effects of the processing parameters on density and surface roughness are similar. It is noted that the effects of laser power and scanning speed on these quality objectives are highly significant, while hatch space has an insignificant impact. Based on the above optimization, 316L stainless steel parts with excellent surface roughness and relative density can be obtained by SLM with optimized processing parameters. Construction of Cellular Substructure in Selective Laser Melting Yafei Wang, Chenglu Zhang, Chenfan Yu, Leilei Xing, Kailun Li, Jinhan Chen, Jing Ma, Wei Liu, Zhijian Shen Subject: Materials Science, Metallurgy Keywords: selective laser melting; substructure; model; growth direction; crystallographic orientation; cell; cell-like dendrite. Cellular substructure has been widely observed in samples fabricated by selective laser melting, while its growth direction and crystallographic orientation have seldom been studied. This research builds a general model to reconstruct the substructure from its two-dimensional morphology. All three Bunge Euler angles needed to specify a unique growth direction are determined, and the crystallographic orientation corresponding to the growth direction is also obtained. Based on the crystallographic orientation, the substructure in the single track is distinguished between cell-like dendrite and cell. It is found that, with increasing scanning velocity, the substructure transits from cell-like dendrite to cell. The critical growth rate of the transition is around 0.31 m/s.
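The Bunge Euler-angle construction used in the cellular-substructure abstract above can be sketched numerically: the orientation matrix g (ZXZ convention) maps a direction measured in the sample frame into crystal coordinates, which is the step needed to compare a measured growth direction with low-index crystal axes. The helper below is an illustrative stand-alone sketch, not the authors' code; the angle values in the example are hypothetical.

```python
import math

def bunge_matrix(phi1, Phi, phi2):
    """Orientation matrix g for Bunge Euler angles (radians, ZXZ convention).

    g maps sample-frame vectors into crystal-frame vectors: v_crystal = g @ v_sample.
    """
    c1, s1 = math.cos(phi1), math.sin(phi1)
    c, s = math.cos(Phi), math.sin(Phi)
    c2, s2 = math.cos(phi2), math.sin(phi2)
    return [
        [c1 * c2 - s1 * s2 * c,  s1 * c2 + c1 * s2 * c, s2 * s],
        [-c1 * s2 - s1 * c2 * c, -s1 * s2 + c1 * c2 * c, c2 * s],
        [s1 * s,                 -c1 * s,                c],
    ]

def to_crystal(g, v):
    """Express a sample-frame direction v in crystal coordinates."""
    return [sum(g[i][j] * v[j] for j in range(3)) for i in range(3)]
```

For example, a substructure growth direction observed along the sample build axis [0, 0, 1] maps to the third column of g; comparing that vector with the ⟨100⟩ family is one plausible way to separate cells from cell-like dendrites.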
Additive Manufacturing of Cobalt-Based Dental Alloys: Analysis of Microstructure and Physico-Mechanical Properties Leonhard Hitzler, Frank Alifui-Segbaya, Philipp Williams, Burkhard Heine, Michael Heitzmann, Wayne Hall, Markus Merkel, Andreas Öchsner Subject: Keywords: Cobalt-chromium alloy; Additive manufacturing; Selective laser melting; Microstructure; Tensile properties; Heat-treatment The limitations of investment casting of cobalt-based alloys are claimed to be less problematic with significant improvements in metal additive manufacturing by selective laser melting (SLM). Despite these advantages, the metallic devices are likely to display mechanical anisotropy in relation to build orientations, which could consequently affect their performance 'in vivo'. In addition, there is inconclusive evidence concerning the requisite composition and post-processing steps (e.g. heat-treatment to relieve stress) that must be completed prior to the devices being used. In the current paper, we evaluate the microstructure of ternary cobalt-chromium-molybdenum (Co-Cr-Mo) and cobalt-chromium-tungsten (Co-Cr-W) alloys built with Direct Metal Printing and LaserCUSING SLM systems, respectively, at 0°, 30°, 60° and 90° inclinations (Φ) in as-built (AB) and heat-treated (HT) conditions. The study also examines the tensile properties (Young's modulus, E; yield strength, RP0.2; elongation at failure, At; and ultimate tensile strength, Rm), relative density (RD), micro-hardness (HV5) and macro-hardness (HV20) as relevant physico-mechanical properties of the alloys. The data obtained indicate improved tensile properties and HV values after a short and cost-effective heat-treatment cycle for the Co-Cr-Mo alloy; however, the process did not homogenize the microstructure of the alloy. Annealing heat-treatment of Co-Cr-W led to significantly more isotropic characteristics with increased E and At (except for Φ = 90°) in contrast to decreased RP0.2, Rm and HV values, compared to the AB form.
Similarly, the interlaced weld-bead structures in AB Co-Cr-W were removed during heat-treatment, which led to complete recrystallization of the microstructure. Both alloys exhibited defect-free microstructures with RD exceeding 99.5%. Review: Molecular Diagnosis of Hepatitis C Viruses; Technologies and Its Clinical Applications Muhammad Ammar Athar, Vakil Ahmad, Inaam Ullah, Samiullah Malik, Shaogui Wan Subject: Life Sciences, Virology Keywords: Hepatitis C virus, Genotyping, Mixed infection, Fluorescence melting curve analysis, Viral Load, Quantification Hepatitis C is one of the most common viral diseases and is caused by the hepatitis C virus (HCV). It is responsible for millions of deaths each year in the developing world. The common dissemination paths of HCV include the use of contaminated water and transfusion of infected blood. Control of this virus has become a challenge for scientists and health professionals due to its versatility and adaptability in different host environments. Along with other problems, the lack of efficient diagnosis, quantification and genotyping of viral strains is a major hindrance in the management of this notorious epidemic. Knowledge of the HCV genotype and the amount of virus in a patient's blood are prerequisites for determining the duration and method of treatment. In this review, we discuss the implications of HCV molecular diagnostic methods and their clinical applications. We conclude that, while several commercial and home-brewed methods are available for this purpose, there is a visible vacuum for cost-effective, robust, sensitive assays that can detect multiple viral genotypes in a single reaction. We are of the view that the level of sensitivity offered by the Reverse Transcriptase-Polymerase Chain Reaction (RT-PCR) technique is unequalled compared to other techniques. Therefore, researchers may explore further possibilities using this technique in the management of HCV.
Polyetheroaryl Oxadiazole/Pyridine-Based Ligands: A Structural Tuning for Enhancing G-Quadruplex Binding Filippo Doria, Valentina Pirota, Michele Petenzi, Marie-Paule Teulade-Fichou, Daniela Verga, Mauro Freccero Subject: Chemistry, Medicinal Chemistry Keywords: G-quadruplex; oxadiazole/pyridine polyheteroaryls; G4-ligands; FRET-melting; G4-FID; circular dichroism Acyclic oligoheteroaryl-based compounds represent a valuable class of ligands for nucleic acid recognition. In this regard, acyclic pyridyl polyoxazoles and polyoxadiazoles were recently identified as selective G-quadruplex stabilizing compounds with high cytotoxicity and promising anticancer activity. Herein, we describe the synthesis of a new family of polyheteroaryl oxadiazole/pyridine ligands targeting DNA G-quadruplexes. In order to perform a structure-activity analysis identifying determinants of activity and selectivity, we followed a convergent synthetic pathway to modulate the nature and number of the heterocycles (1,3-oxazole vs 1,2,4-oxadiazole and pyridine vs benzene). Each ligand was evaluated towards secondary nucleic acid structures chosen as prototypes to mimic cancer-associated G-quadruplex structures (e.g., the human telomeric sequence, c-myc and c-kit promoters). Interestingly, heptapyridyl-oxadiazole compounds showed preferential binding towards the telomeric sequence (22AG) in competitive conditions vs duplex DNA. In addition, G4-FID assays suggest a binding mode different from the classical stacking on the external G-quartet. Additionally, CD titrations in the presence of the two most promising compounds for affinity, TOxAzaPy and TOxAzaPhen, display a structural transition of 22AG in K+-rich buffer. This investigation suggests that the pyridyl-oxadiazole motif is a promising recognition element for G-quadruplexes, combining seven heteroaryls in a single binding unit.
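FRET-melting assays such as those behind the G4-stabilization data above report ligand-induced stabilization as ΔTm, the shift in the temperature at which half of the labelled oligonucleotide is unfolded. A minimal way to extract Tm from a normalized melting curve is linear interpolation at the 0.5 crossing; the curves below are synthetic two-state sigmoids for illustration, not data from this study.

```python
import math

def tm_from_curve(temps, frac_unfolded):
    """Melting temperature: where the normalized curve crosses 0.5 (linear interpolation).

    Assumes temps ascending and frac_unfolded increasing from ~0 to ~1.
    """
    for i in range(len(temps) - 1):
        f0, f1 = frac_unfolded[i], frac_unfolded[i + 1]
        if f0 <= 0.5 <= f1:
            t0, t1 = temps[i], temps[i + 1]
            return t0 + (0.5 - f0) * (t1 - t0) / (f1 - f0)
    raise ValueError("curve does not cross 0.5")

def sigmoid_curve(temps, tm, width=2.0):
    """Synthetic two-state melting profile centred at tm (for demonstration only)."""
    return [1.0 / (1.0 + math.exp(-(t - tm) / width)) for t in temps]
```

The stabilization figure then follows as `tm_from_curve(T, with_ligand) - tm_from_curve(T, free_g4)`, positive when the ligand stabilizes the quadruplex.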
An Experimental Investigation of the Thermal and Economic Performance of PCM-Embedded Hybrid Water Heater under Saharan Climate Sidi Mohammed El Amine Bekkouche, Rachid Djeffal, Mohamed Kamal Cherier, Maamar Hamdani, Zohir Younsi, Saleh Al-Saadi, Mohamed Zaiani Subject: Engineering, Energy & Fuel Technology Keywords: hybrid DHW system; water heater; PCM; melting temperature; latent heat storage; produced water cost The solar water heater must be integrated into future residential buildings as the main energy source, which will subsequently reduce the energy cost of water heating. An original configuration for efficient Domestic Hot Water "DHW" storage tank is developed and experimentally evaluated under Saharan climate. This novel DHW configuration includes a hybrid (solar and electric) energy system with a flat plate solar collector coupled with an electric heater. Additionally, phase change material "PCM" mixture that is composed of paraffin wax and animal fat with a melting temperature between 35.58°C and 62.58°C and latent heat between 180 and 210 kJ/kg is integrated into this novel tank configuration. The experimental results indicated that hot water production by using latent heat storage could be economically attractive. In this proposed configuration, one liter of hot water may cost around 0.1362 DZD/liter (i.e., 0.00096 US$/liter) compared to 0.4431 DZD/liter for the conventional water heater, an average energy cost savings of 69.26%. On a yearly basis, the average energy cost savings may reach up to 80.25% if optimal tilt for the solar collector is adopted on a monthly basis. The flat plate collector may be vulnerable to convective heat transfer, and therefore, other solar collectors such as vacuum tube collectors may provide enhanced energy performance. Influence of High Temperature on the Fracture Properties of Polyolefin Fibre Reinforced Concrete Marcos G. Alberti, Jaime C. 
Gálvez, Alejandro Enfedaque, Ramiro Castellanos Subject: Engineering, Automotive Engineering Keywords: fracture behaviour; fibre reinforced concrete; high temperature; melting point; flexural tensile strength; polyolefin fibres Concrete has become the most common construction material, showing, among other advantages, good behaviour when subjected to high temperatures. Nevertheless, concrete is usually reinforced with elements of other materials, such as steel in the form of rebars or fibres. Thus, the behaviour under high temperatures of these other materials can be critical for structural elements. In addition, concrete spalling occurs when concrete is subjected to high temperature due to internal pressures. Micro polypropylene fibres (PP) have been shown to be effective in reducing such spalling, although this type of fibre barely improves any of the mechanical properties of the element. Hence, a combination of PP with steel rebars or fibres can be effective for the structural design of elements exposed to high temperatures. New polyolefin fibres (PF) have become an alternative to steel fibres. PF meet the requirements of the standards to consider the contributions of the fibres in structural design. However, there is a lack of evidence about the behaviour of PF and elements made of polyolefin fibre reinforced concrete (PFRC) subjected to high temperatures. Given that these polymer fibres would melt above 250 °C, the behaviour at intermediate temperatures was assessed in this study. Uni-axial tests on individual fibres and three-point bending tests of PFRC specimens were performed. The results have shown that the residual load-bearing capacity of the material is gradually lost up to 200 °C, though the PFRC showed structural performance up to 185 °C. Analytical Modeling of Residual Stress in Selective Laser Melting Considering Volume Conservation in Plastic Deformation Elham Mirkoohi, Dongsheng Li, Hamid Garmestani, Steven Y.
Liang Subject: Engineering, Mechanical Engineering Keywords: Selective Laser Melting; residual stress; direct metal deposition; thermomechanical analytical modeling; Ti-6Al-4V Residual stress (RS) is the most challenging problem in metal additive manufacturing (AM), since the build-up of high tensile RS may influence the fatigue life, corrosion resistance, crack initiation, and failure of additively manufactured components. While tensile RS is inherent in all AM processes, fast and accurate prediction of the stress state within the part is extremely valuable and would result in optimization of the process parameters in achieving a desired RS and control of the build process. This paper proposes a physics-based analytical model to rapidly and accurately predict the RS within the additively manufactured part. In this model, a transient moving point heat source (HS) is utilized to determine the temperature field. Due to the high temperature gradient in the proximity of the melt pool area, the material experiences high thermal stress. Thermal stress is calculated by combining three sources of stress: stresses due to the body forces, normal tension, and hydrostatic stress in a homogeneous semi-infinite medium. The thermal stress determines the RS state within the part. Consequently, by taking the thermal stress history as an input, both the in-plane and out-of-plane RS distributions are found from incremental plasticity and kinematic hardening behavior of the metal by considering volume conservation in plastic deformation in coupling with the equilibrium and compatibility conditions. In this modeling, material properties are temperature-sensitive, since the steep temperature gradient varies the properties significantly. Moreover, the energy needed for the solid-state phase transition is reflected by modifying the specific heat employing the latent heat of fusion.
Furthermore, the multi-layer and multi-scan aspects of metal AM are considered by including the temperature history from previous layers and scans. Results from the analytical RS model presented excellent agreement with XRD measurements employed to determine the RS in the Ti-6Al-4V specimens. Influence of Selective Laser Melting Technological Parameters on the Mechanical Properties of Additively Manufactured Elements Using 316L Austenitic Steel Janusz Kluczyński, Lucjan Śnieżek, Krzysztof Grzelak, Jacek Janiszewski, Paweł Płatek, Janusz Torzewski, Ireneusz Szachogłuchowicz, Krzysztof Gocman Subject: Engineering, Mechanical Engineering Keywords: 316L austenitic steel; selective laser melting; powder bed fusion; technological parameters; mechanical property characterization The main aim of this study is to investigate the optimization of the technological process for selective laser melting (SLM) additive manufacturing. The group of process parameters considered was selected from the first-stage parameters identified in preliminary research. Samples manufactured using three different sets of parameter values were subjected to static tensile and compression tests. The samples were also subjected to dynamic Split–Hopkinson tests. To verify the microstructural changes after the dynamic tests, microstructural analyses were conducted. Additionally, the element deformation during the tensile tests was analyzed using digital image correlation (DIC). To analyze the influence of the selected parameters and verify the layered structure of the manufactured elements, sclerometer scratch hardness tests were carried out on each sample. Based on the research results, it was possible to observe the porosity growth mechanism and its influence on the material strength (including static and dynamic tests).
Parameter modifications that produced a 20% lower energy density halved the elongation of the elements during tensile testing, which was strictly connected with porosity growth. An almost threefold increase of the energy density caused a significant reduction of the differences in force fluctuations between the two tested surfaces (parallel and perpendicular to the building platform) during sclerometer hardness testing. This phenomenon was taken into account in the microstructure investigations before and after dynamic testing, where a positive impact on material deformation, related to the formation of fused material grains after SLM processing, was observed. Shape-memory Nanofiber Meshes with Programmable Cell Orientation Eri Niiyama, Kanta Tanabe, Koichiro Uto, Akihiko Kikuchi, Mitsuhiro Ebara Subject: Materials Science, Biomaterials Keywords: shape memory nanofiber; shape memory polymer; poly(ε-caprolactone); melting temperature; cell orientation; polyurethane This paper reports a rational design of temperature-responsive nanofiber meshes with a shape-memory effect. The meshes were fabricated by electrospinning a poly(ε-caprolactone) (PCL)-based polyurethane with different contents of soft and hard segments. The effects of the PCL diol/hexamethylene diisocyanate (HDI)/1,4-butanediol (BD) molar ratio, in terms of the contents of soft and hard segments, on the shape-memory properties were investigated. Although the mechanical properties improved with increasing hard segment ratio, optimal shape-memory properties were obtained with a PCL/HDI/BD molar ratio of 1:4:3. At a microscopic level, the original nanofibrous structure was easily deformed into a temporary shape, and recovered its original structure when the sample was reheated. A high recovery rate (>89%) was achieved even when the mesh was deformed up to 400%. Finally, the nanofiber meshes were used to control the alignment of human mesenchymal stem cells (hMSCs).
The hMSCs aligned well along the fiber orientation. The proposed nanofibrous meshes with the shape-memory effect have the potential to serve as in vitro platforms for the investigation of cell functions as well as implantable scaffolds for wound-healing applications. Analytical Modeling of Three-Dimensional Temperature Distribution of Selective Laser Melting of Ti-6Al-4V Jinqiang Ning, Steven Y. Liang Subject: Engineering, Mechanical Engineering Keywords: Metallic Additive Manufacturing, Selective Laser Melting, Analytical Modeling, 3D Temperature Prediction, Molten Pool Dimension Selective laser melting (SLM) is one of the widely used techniques in metallic additive manufacturing, in which high-density laser power is utilized to selectively melt layers of powder to create geometrically complex parts. Temperature distribution and molten pool geometry directly determine the balling effect, and a concentrated balling phenomenon significantly deteriorates the surface integrity and mechanical properties of the part. Finite element models have been developed to predict temperature distribution and molten pool geometry, but they are computationally expensive. In this paper, the three-dimensional temperature distributions are predicted by analytical models using a point moving heat source and a semi-ellipsoidal moving heat source, respectively. The molten pool dimensions under various process conditions are obtained from the three-dimensional temperature predictions and experimentally validated. Ti-6Al-4V alloy is chosen for the investigation. Good agreement between the predictions and the measurements is observed. The presented models are also suitable for other metallic materials in the SLM process.
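The point moving heat source invoked in the abstract above is classically the Rosenthal quasi-steady solution: in the frame moving with the laser, T = T0 + ηP/(2πkR)·exp(−v(R + ξ)/(2α)), where R is the distance to the source and ξ the coordinate along the scan direction. The snippet below is an illustrative sketch of that textbook solution with rough room-temperature Ti-6Al-4V constants (k ≈ 6.7 W/(m·K), α ≈ 2.9 × 10⁻⁶ m²/s) and hypothetical process parameters; it is not the authors' model, which additionally treats the semi-ellipsoidal source.

```python
import math

def rosenthal_temperature(xi, y, z, power=200.0, eta=0.4, speed=0.1,
                          t0=293.0, k=6.7, alpha=2.9e-6):
    """Quasi-steady point-source temperature (K) in the frame moving with the laser.

    xi: coordinate along the scan direction in metres (xi < 0 lies behind the source).
    eta*power is the absorbed laser power; speed is the scan speed in m/s.
    """
    r = math.sqrt(xi * xi + y * y + z * z)
    return t0 + eta * power / (2.0 * math.pi * k * r) \
        * math.exp(-speed * (r + xi) / (2.0 * alpha))

def melt_pool_length(t_melt=1928.0, step=1e-6, span=2e-3):
    """Estimate the melt pool length by scanning T along the centreline (y = z = 0)."""
    xs = [i * step for i in range(-int(span / step), int(span / step) + 1) if i != 0]
    molten = [x for x in xs if rosenthal_temperature(x, 0.0, 0.0) >= t_melt]
    return (max(molten) - min(molten)) if molten else 0.0
```

The exponential factor makes the trailing side hotter than the leading side at equal distance, which is why the predicted pool is elongated behind the source.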
Two-Step Identification of N-, S-, R- and T-Cytoplasm Types in Onion Breeding Lines Using High Resolution Melting (HRM)-Based Markers Ludmila Khrustaleva, Mais Nzeha, Aleksey Ermolaev, Ekaterina Nikitina, Valery Romanov Subject: Life Sciences, Genetics Keywords: Cytoplasmic male-sterility; High resolution melting (HRM); molecular markers; mitochondrial genes; onion (Allium cepa L.) High resolution melting (HRM) analysis is a powerful detection method for fast, high-throughput post-PCR analysis. A two-step HRM marker system was developed for identification of the N-, S-, R- and T-cytoplasms of onion. In the first step, for identification of the N-, S-, and R-cytoplasms, one forward primer designed against the identical sequences of both the cox1 and orf725 genes and two reverse primers specific to the polymorphic sequences of the cox1 and orf725 genes were used. In the second step, breeding lines with N-cytoplasm were evaluated with primers developed from the orfA501 sequence to distinguish between N- and T-cytoplasms. An amplicon with primers to the mitochondrial atp9 gene was used as an internal control. The two-step HRM marker system was tested using 246 onion plants. HRM analysis showed that the most common source of CMS, often used by Russian breeders, is S-cytoplasm; the rarest type of CMS is R-cytoplasm; and the proportion of T-cytoplasm among the analyzed breeding lines was 20.5%. Investigations on Mechanical Properties of Lattice Structures with Different Values of Relative Density Made from 316L by Selective Laser Melting (SLM) Paweł Płatek, Judyta Sienkiewicz, Jacek Janiszewski, Fengchun Jiang Subject: Engineering, Mechanical Engineering Keywords: lattice structures; additive manufacturing; selective laser melting; powder bed fusion; energy absorption; dynamic compression; crashworthiness Nine variants of regular lattice structures with different relative densities have been designed and successfully manufactured.
The produced structures have been subjected to geometrical quality control, and the manufacturability of the implemented selective laser melting (SLM) technique has been assessed. It was found that the dimensions of the produced lattice struts differ from those of the designed struts, and these deviations depend on the direction of geometrical evaluation. Additionally, the microstructures and phase compositions of the obtained structures were characterized and compared with those of conventionally produced 316L stainless steel. The microstructure analysis and X-ray diffraction (XRD) patterns revealed a single austenite phase in the SLM samples. Both a certain broadening and a displacement of the austenite peaks were observed due to residual stresses and a crystallographic texture induced by the SLM process. Furthermore, the mechanical behavior of the lattice structure material has been defined. It was demonstrated that, under both quasi-static and dynamic testing, lattice structures with high relative densities are stretch-dominated, whereas those with low relative densities are bending-dominated. Moreover, a linear relationship between the energy absorption and relative density under dynamic loading conditions has been defined. Optimized Design of Modular Multilevel DC De-Icer for High Voltage Transmission Lines Jiazheng Lu, Qingjun Huang, Xinguo Mao, Yanjun Tan, Siguo Zhu, Yuan Zhu Subject: Engineering, Electrical & Electronic Engineering Keywords: converter; ice-melting; modular multilevel converter (MMC); optimization design; transmission line; static var generator (SVG) Ice covering on overhead transmission lines can cause damage to the transmission system and long-term power outages. Among various de-icing devices, the modular multilevel converter (MMC) based DC de-icer (MMC-DDI) is recognized as a promising solution due to its excellent technical performance.
Its principle feasibility has been well studied, but little literature discusses its economy or hardware optimization; thus, the MMC-DDI designed for high voltage transmission lines is usually too large and too expensive for engineering applications. To fill this gap, this paper presents a quantitative analysis of the converter characteristics of the MMC-DDI, and calculates the minimal converter rating and its influencing factors. It reveals that, for a given de-icing requirement, the converter rating varies greatly with its AC-side voltage. An optimized configuration is then proposed to reduce the converter rating and improve its economy. The proposed configuration is verified in an MMC-DDI for a 500 kV transmission line as a case study. The results show that, for the same output de-icing characteristics, the optimized converter rating is reduced from 151 MVA to 68 MVA, and the total cost of the MMC-DDI is reduced by 48%. This analysis and conclusion are conducive to the optimized design of multilevel DC de-icers and hence to their engineering application. Electron-Phonon Coupling and Nonthermal Effects in Gold Nano-Objects at High Electronic Temperatures Nikita Medvedev, Igor Milov Subject: Physical Sciences, Condensed Matter Physics Keywords: Electron-phonon coupling; Nanoparticle; Ultrathin layer; Nonthermal melting; Tight-binding molecular dynamics; Boltzmann collision integrals; XTANT Laser irradiation of metals is widely used in research and applications. In this work, we study how the material geometry affects electron-phonon coupling in nano-sized gold samples: an ultrathin layer, a nano-rod, and two types of gold nanoparticles, cubic and octahedral. We use the combined tight-binding molecular dynamics–Boltzmann collision integral method implemented within the XTANT-3 code to evaluate the coupling parameter in irradiated targets at high electronic temperatures (up to Te ~ 20,000 K).
Our results show that the electron-phonon coupling in all objects with the same fcc atomic structure (bulk, layer, rod, cubic and octahedral nanoparticles) is nearly identical at electronic temperatures above Te ~ 7000 K, independent of geometry and dimensionality. At low electronic temperatures, reducing the dimensionality reduces the coupling parameter. Additionally, nano-objects under ultrafast energy deposition experience nonthermal damage due to expansion caused by electronic pressure, in contrast to bulk metal. Ultrafast expansion of nano-objects leads to ablation/emission of atoms and disorder inside the remaining parts. This nonthermal atomic expansion and melting is significantly faster than electron-phonon coupling, forming a dominant effect in nano-sized gold. Influence of Powder Characteristics on the Microstructure and Mechanical Behaviour of GH4099 Superalloy Fabricated by Electron Beam Melting Shixing Wang, Shen Tao, Hui Peng Subject: Materials Science, Metallurgy Keywords: Ni-based superalloys; electron beam melting; additive manufacturing; Argon gas atomized; plasma rotation electrode process; powder characteristics A Chinese superalloy, GH4099 (~20 vol.% γ' phase), which can operate for long periods of time at temperatures of 1173-1273 K, was fabricated by electron beam melting (EBM). Argon gas atomized (AA) and plasma rotation electrode process (PREP) powders with similar composition and size distribution were used as raw materials for comparison. The microstructure and mechanical properties of both the as-EBMed and post-treated alloy samples were investigated. The results show that the different powder characteristics resulted in different build temperatures for the AA and PREP samples, which were 1253 K and 1373 K, respectively. With increasing build temperature, the EBM processing window shifted towards the higher scanning speed direction.
Furthermore, intergranular cracking was observed in the as-fabricated PREP sample as a result of local enrichment of Si at grain boundaries. The cracks were completely eliminated by hot isostatic pressing (HIPing) and did not re-open during subsequent solution treatment and aging (STA). Fine spherical γ' phase precipitated uniformly after STA. The tensile strength of the HIP+STA samples was ~920 MPa in the building direction and ~850 MPa in the horizontal direction, comparable with that of the wrought alloy. Influence of Preheating Temperature on Hardness and Microstructure of PBF Steel HS6-5-3-8 Jasmin Saewe, Markus Benjamin Wilms, Lucas Jauer, Johannes Schleifenbaum Subject: Keywords: LPBF; Laser Powder Bed Fusion; SLM; Selective Laser Melting; High-speed steel; tool steel; high carbon content; preheating temperature Laser powder bed fusion (LPBF) is an additive manufacturing process employed in many industries, for example for aerospace, automotive and medical applications. In these sectors, mainly nickel-, aluminum- and titanium-based alloys are used. In contrast, the mechanical engineering industry is interested in more wear-resistant steel alloys with higher hardness, both of which can be achieved with a higher carbon content, as in high-speed steels. Since these steels are susceptible to cracking, preheating needs to be applied during processing by LPBF. In a previous study, we applied a base plate preheating temperature of 500 °C for HS6-5-3-8 with 1.3 % carbon content, and were able to manufacture dense (relative density > 99.9 %) and crack-free parts from HS6-5-3-8 with a hardness > 62 HRC (as built) by LPBF. In this study, we investigate the influence of preheating temperatures of up to 600 °C on the hardness and microstructure of HS6-5-3-8 as a function of part height. The microstructure was studied by light optical microscopy (LOM), scanning electron microscopy (SEM) and electron backscatter diffraction (EBSD).
The analysis of hardness and microstructure at different part heights is necessary because state-of-the-art preheating systems induce heat only into the base plate. Consequently, parts are subjected to temperature gradients and different heat treatment effects depending on part height during the LPBF process. Development of a Technology for the Release of Iron and Its Oxide Compounds from Dump Steel-Smelting Slag Sokhibjon Turdaliyevich Matkarimov, Anvar Abdullayevich Yusupkhodjayev, Bakhriddin Berdiyarov Berdiyarov Subject: Materials Science, Metallurgy Keywords: slag; metallurgical dust; rolling scale; tails of dressing-works; iron; magnetite; fusion mixture; melting; arc steel-smelting furnace; production efficiency This article considers the development of low-waste technologies for processing steel-smelting slag, allowing iron and its compounds to be extracted from the slag to obtain additional raw material for steel production, with the remainder used in the building industry. The work is based on the study of gravitational methods of enrichment of steel-smelting slag and the heat treatment of ore-fuel pellets. Accordingly, modern physico-mechanical, chemical and physico-chemical research methods (UV spectroscopy, electron microscopy, granulometric analysis) are used. Overhanging Features and the SLM/DMLS Residual Stresses Problem: Review and Future Research Need Albert E. Patterson, Sherri L. Messimer, Phillip A. Farrington Subject: Engineering, Industrial & Manufacturing Engineering Keywords: additive manufacturing; 3-D printing; metal additive manufacturing; selective laser melting; SLM; direct metal laser sintering; DMLS; metal powder processing Online: 4 April 2017 (07:56:07 CEST) A useful and increasingly common additive manufacturing (AM) process is the selective laser melting (SLM) or direct metal laser sintering (DMLS) process.
SLM/DMLS can produce full-density metal parts from difficult materials, but it tends to suffer from severe residual stresses introduced during processing. This limits the usefulness and applicability of the process, particularly in the fabrication of parts with delicate overhanging and protruding features. The purpose of this study was to examine the current insight and progress made toward understanding and eliminating the problem in overhanging and protruding structures. To accomplish this, a survey of literature was undertaken, focusing on process modeling (general, heat transfer, stress and distortion, and material models), direct process control (input and environmental control, hardware-in-the-loop monitoring, parameter optimization, and post-processing), experiment development (methods for evaluation, optical and mechanical process monitoring, imaging, and design-of-experiments), support structure optimization, and overhang feature design; approximately 140 published works were examined. The major findings of this study were that a small minority of the literature on SLM/DMLS deals explicitly with the overhanging stress problem, but some fundamental work has been done on the problem. Implications, needs, and potential future research directions are discussed in-depth in light of the present review.
July 2019, 24(7): 2989-3009. doi: 10.3934/dcdsb.2018296 Comparison theorem and correlation for stochastic heat equations driven by Lévy space-time white noises Min Niu 1 and Bin Xie 2 1 Department of Applied Mathematics, School of Mathematics and Physics, University of Science and Technology Beijing, No. 30 Xueyuan Road, Haidian, Beijing 100083, China 2 Department of Mathematical Sciences, Faculty of Science, Shinshu University, 3-1-1 Asahi, Matsumoto, Nagano 390-8621, Japan * Corresponding author: Bin Xie Received January 2018; Revised June 2018; Published October 2018 Fund Project: The first author is supported in part by NSF of China (No. 11571030) and the second author is supported by JSPS KAKENHI (No. 16K05197) Two properties of stochastic heat equations driven by impulsive noises, which are also called Lévy space-time white noises, are investigated in this paper. We first establish a comparison theorem for two stochastic heat equations driven by the same noise under a sufficient condition, proved via an application of Itô's formula. In particular, we obtain the non-negativity of solutions with non-negative initial data. We then investigate the positive correlation of the solutions as an application of the comparison theorem, and prove that the total masses of two solutions of different stochastic heat equations driven by the same noise become nearly uncorrelated after a long time. Keywords: Stochastic heat equation, impulsive noise, comparison theorem, non-negativity, correlation. Mathematics Subject Classification: Primary: 60H15; Secondary: 35R60, 60G51. Citation: Min Niu, Bin Xie. Comparison theorem and correlation for stochastic heat equations driven by Lévy space-time white noises. 
Discrete & Continuous Dynamical Systems - B, 2019, 24(7): 2989-3009. doi: 10.3934/dcdsb.2018296
Ozone and childhood respiratory disease in three US cities: evaluation of effect measure modification by neighborhood socioeconomic status using a Bayesian hierarchical approach Cassandra R. O'Lenick1, Howard H. Chang2, Michael R. Kramer3, Andrea Winquist1, James A. Mulholland4, Mariel D. Friberg4 & Stefanie Ebelt Sarnat1 The Erratum to this article has been published in Environmental Health 2017 16:63 Ground-level ozone is a potent airway irritant and a determinant of respiratory morbidity. Susceptibility to the health effects of ambient ozone may be influenced by both intrinsic and extrinsic factors, such as neighborhood socioeconomic status (SES). Questions remain regarding the manner and extent to which factors such as SES influence ozone-related health effects, particularly across different study areas. Using a 2-stage modeling approach, we evaluated neighborhood SES as a modifier of ozone-related pediatric respiratory morbidity in Atlanta, Dallas, and St. Louis. We acquired multi-year data on emergency department (ED) visits among 5–18 year olds with a primary diagnosis of respiratory disease in each city. Daily concentrations of 8-h maximum ambient ozone were estimated for all ZIP Code Tabulation Areas (ZCTA) in each city by fusing observed concentration data from available network monitors with simulations from an emissions-based chemical transport model. In the first stage, we used conditional logistic regression to estimate ZCTA-specific odds ratios (OR) between ozone and respiratory ED visits, controlling for temporal trends and meteorology. In the second stage, we combined ZCTA-level estimates in a Bayesian hierarchical model to assess overall associations and effect modification by neighborhood SES considering categorical and continuous SES indicators (e.g., ZCTA-specific levels of poverty). We estimated ORs and 95% posterior intervals (PI) for a 25 ppb increase in ozone. 
The hierarchical model combined effect estimates from 179 ZCTAs in Atlanta, 205 ZCTAs in Dallas, and 151 ZCTAs in St. Louis. The strongest overall association of ozone and pediatric respiratory disease was in Atlanta (OR = 1.08, 95% PI: 1.06, 1.11), followed by Dallas (OR = 1.04, 95% PI: 1.01, 1.07) and St. Louis (OR = 1.03, 95% PI: 0.99, 1.07). Patterns of association across levels of neighborhood SES in each city suggested stronger ORs in low compared to high SES areas, with some evidence of non-linear effect modification. Results suggest that ozone is associated with pediatric respiratory morbidity in multiple US cities; neighborhood SES may modify this association in a non-linear manner. In each city, children living in low SES environments appear to be especially vulnerable given positive ORs and high underlying rates of respiratory morbidity. Ground-level ozone, a criteria pollutant regulated by the US Environmental Protection Agency (USEPA), is a potent airway irritant and well-known determinant of adverse health outcomes, including respiratory morbidity and mortality [1]. Increasing evidence suggests that intrinsic factors (e.g. age, sex, genetics), extrinsic factors (e.g. low socioeconomic status), and differential exposure among populations may potentiate susceptibility to the health effects of ambient ozone [2]. However, questions remain as to the degree of influence these factors exert on ozone-related health effects [3]. Intrinsically, children are considered more vulnerable than adults to the health effects of ozone due to their higher ventilation rates, a developing respiratory system, and time activity patterns that generally increase their exposures to ambient ozone. Concomitantly, physiological differences in airway structure and function cause greater doses of pollutants to be delivered into airways and predispose children to airway inflammation and obstruction [4–6]. 
Extrinsically, low socioeconomic status (SES) may exacerbate vulnerabilities among children through greater exposure to indoor and outdoor air pollutants, greater psychosocial stress associated with their home or neighborhood environments, and reduced access to vital resources including nutritious food and adequate health care [7–9]. However, findings to date have not conclusively identified SES as a modifier of ozone-related respiratory disease [2, 3]. Findings from studies investigating modification of acute air pollution health risks by neighborhood socioeconomic environments have been particularly inconsistent, with weak or contradictory results [10–27]. Among these studies, conclusions about effect modification by neighborhood SES differed depending on indicator choice within the same study [11, 15, 24, 25, 27], and differed between study locations even when the same neighborhood SES indicators were used [10, 11, 15, 26]. These observed incongruences call into question whether findings from individual studies, often conducted in single cities or communities, can be generalized. Previous findings from our research team in Atlanta identified robust associations between ground-level ozone and pediatric respiratory health outcomes [27–33]. Analyses examining effect modification of ozone-related pediatric asthma ED visits by neighborhood-level SES suggested non-linear patterns of effect modification by neighborhood SES in Atlanta; for example, in some analyses we observed stronger associations between ozone and pediatric asthma ED visits in the highest and lowest SES strata and weaker associations in middle SES strata [27]. This pattern of effect modification could be partially responsible for the null and unanticipated patterns observed in previous studies. We also found that patterns of effect modification differed depending on our choice of SES indicator and choice of stratification criteria (e.g. median values versus quartile values). 
However, the generalizability of these findings to other study areas or other respiratory health outcomes has not been established. Several studies have utilized Bayesian hierarchical models to explore associations between air pollution and adverse health outcomes across multiple study locations in a computationally efficient manner [34–38]. Furthermore, analyzing multicity data using Bayesian hierarchical models allows for assessment of factors that may help to explain between-location heterogeneity and ultimately ascertain population-level vulnerability factors [34, 35]. Here, we use a two-stage Bayesian hierarchical approach to examine effect modification of ozone-related pediatric respiratory disease by categorical and continuous measures of neighborhood SES in three diverse cities (Atlanta, Dallas, and St. Louis). By applying a consistent analytic approach, we assess the generalizability of associations between ozone and pediatric respiratory disease across study areas and evaluate whether patterns of effect modification differ by city. Emergency department visit data Multi-year ED visit data were collected from three diverse study locations: the metropolitan areas of Atlanta, Dallas, and St. Louis. These data have been used previously in air pollution health effects investigations [18, 33, 39, 40]. For the current analysis, daily ED visit data were available for 2002–2008 from 41 hospitals in 20-county Atlanta; data through 2004 were collected from individual hospitals directly while 2005–2008 data were collected through the Georgia Hospital Association. Daily ED data were available for 2006–2008 from the Dallas-Fort Worth Hospital Council Foundation for 36 hospitals in the 12-county Dallas metro area. In St. Louis, daily ED data were available for 2002–2007 from the Missouri Hospital Association for 36 hospitals in the 16-county metro area. 
Daily ED visits for respiratory outcomes (upper respiratory infections, bronchiolitis, pneumonia, asthma, and wheeze) were identified using primary International Classification of Diseases, 9th Revision (ICD-9) codes 460–486, 493, 786.07. We restricted our analyses to the pediatric population (5–18 years old) and to patients with a residential ZIP code located wholly or partially in 20-county Atlanta (232 ZIP codes), 12-county Dallas (271 ZIP codes), or 16-county St. Louis (264 ZIP codes). The Emory University Institutional Review Board approved this study and granted exemption from informed consent requirements. To create spatial scales compatible with air quality and census-based data, each ZIP code in the ED visit database was assigned to a 2010 Zip Code Tabulation Area (ZCTA, Census Bureau boundaries, created from census blocks to approximate ZIP codes). Assignments were accomplished by matching each ZIP code to a 2010 ZCTA based on 5-digit Census ID numbers. ZIP code change reports helped facilitate ZCTA assignments for ZIP codes that were altered or eliminated during the study period. ZCTAs that were classified as businesses or university campuses were excluded from the study. The resulting study areas included 191 ZCTAs in Atlanta, 253 ZCTAs in Dallas, and 256 ZCTAs in St. Louis. Neighborhood-level socioeconomic data Estimates of ZCTA-level socioeconomic status (SES) were obtained from the 2000 US Census long form and the American Community Survey (ACS) 5-year (2007–2011) summary file, all normalized to 2010 ZCTA borders ("The Time-Series Research Package", GeoLytics, Inc., East Brunswick, NJ, 2013). In our analyses, ZCTA boundaries were used to represent neighborhoods of patient residence and yearly values of neighborhood-level (i.e. ZCTA-level) SES were estimated by linear interpolation of Census 2000 and ACS 2007–2011 values. 
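As a minimal sketch of this interpolation step (values and function names are hypothetical; anchoring the ACS 2007–2011 estimate at its 2009 midpoint is our assumption, not stated in the text), the yearly SES values could be computed as:

```python
def interpolate_ses(value_2000: float, value_acs: float, year: int,
                    acs_anchor_year: int = 2009) -> float:
    """Linearly interpolate a ZCTA-level SES indicator between the
    Census 2000 value and the ACS 2007-2011 value.  The ACS 5-year
    estimate is anchored at its 2009 midpoint (an assumption)."""
    slope = (value_acs - value_2000) / (acs_anchor_year - 2000)
    return value_2000 + slope * (year - 2000)

# Hypothetical ZCTA: 12% of households below poverty in 2000, 18% in the
# ACS estimate.  Mean over a city's study period, e.g. Dallas 2006-2008:
years = range(2006, 2009)
mean_poverty = sum(interpolate_ses(12.0, 18.0, y) for y in years) / len(years)
```

Averaging the interpolated yearly values over each city's study period mirrors the next step the authors describe.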
We then averaged the yearly values across the study periods of each city (2002–2008 in Atlanta; 2006–2008 in Dallas; and 2002–2007 in St. Louis) to estimate a mean SES value for each neighborhood. To represent neighborhood-level SES, we chose the percentage (%) of the population (≥25 years old) with less than a 12th grade education (% <12th grade), the % of households living below the poverty line (% below poverty), and the Neighborhood Deprivation Index (NDI), a composite index comprising 8 single indicators of SES (i.e. % household low income (<$30,000), % males not in management, % <12th grade, % of households living below the poverty line, % female-headed households, % living in crowding, % households on public assistance, and % unemployed civilian population) that were summarized using principal components analysis [41]. To enable comparison of results across different SES indicators, analyses were performed for all indicators of neighborhood SES used in this study (% <12th grade education, % below poverty, NDI). Ambient ozone concentration data Our study used daily estimates of ambient 8-h maximum ozone for each ZCTA in Atlanta, Dallas, and St. Louis. Daily concentrations of ambient 8-h maximum ozone were estimated by combining observational data from network monitors in each city with pollutant concentration simulations from an emissions-based chemical transport model, the Community Multi-Scale Air Quality version 4.5 (CMAQ) model, at 12×12 km grids over Atlanta, Dallas, and St. Louis [42]. Ozone concentrations were estimated for each ZCTA by determining the fraction of a ZCTA's area within each 12×12 km grid cell and area-weighting the observation-simulation data fusion estimates to get the ZCTA-specific value. Although a 12×12 km grid is a relatively large area for assessing exposure to air pollutants, ozone is a spatially homogeneous secondary pollutant and concentrations are unlikely to vary substantially over the 12×12 km grids used in each city. 
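The area-weighting step for assigning grid-level fused ozone estimates to ZCTAs can be sketched as follows (grid identifiers and concentrations are hypothetical, not the authors' data):

```python
def zcta_ozone(grid_conc: dict, area_fraction: dict) -> float:
    """Area-weighted daily 8-h max ozone for one ZCTA.

    grid_conc:     {grid_cell_id: fused ozone concentration (ppb)}
    area_fraction: {grid_cell_id: fraction of the ZCTA's area in that cell}
    Fractions are assumed to sum to 1 over the cells intersecting the ZCTA.
    """
    return sum(area_fraction[g] * grid_conc[g] for g in area_fraction)

# A hypothetical ZCTA split 70/30 across two 12x12 km cells:
conc = {"cell_a": 44.0, "cell_b": 38.0}
frac = {"cell_a": 0.7, "cell_b": 0.3}
daily_value = zcta_ozone(conc, frac)  # 0.7*44 + 0.3*38 = 42.2 ppb
```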
We specifically chose ambient ozone and our exposure modeling approach to minimize the potential for exposure measurement error in each city. Daily meteorological data were obtained from National Climatic Data Centers at Atlanta Hartsfield International Airport, Dallas/Ft. Worth International Airport, and St. Louis Lambert International Airport. We applied a two-stage modeling approach to estimate associations between daily ZCTA-specific ozone concentrations and pediatric respiratory ED visits, as well as to evaluate effect modification by neighborhood SES across multiple locations. In the first stage (Stage 1), associations between 3-day moving average (lag days 0–2) ZCTA-specific ozone concentrations and pediatric respiratory disease were estimated for every ZCTA in Atlanta, Dallas, and St. Louis in time-stratified case-crossover analyses using conditional logistic regression, matching on year, month, and day of the week of the ED visit. We chose a 3-day moving average of ozone as our a priori lag structure based on previous work [27, 28, 43]. We included additional control for time-varying factors: indicator variables for season (4 levels), periods of hospital participation, and holidays; cubic polynomials for 3-day moving average (lags 0–2) maximum temperature and mean dew point; interaction terms between season and maximum temperature; and a cubic spline on day of year (5 degrees of freedom) to control smoothly for recurrent within-window seasonal trends. The general structure of each Stage 1, ZCTA-specific model was:

$$ \mathrm{logit}\left[\Pr\left(Y_{kt}=1\right)\right] = \beta_0 + \sum_{k=1}^{x}\zeta_k V_k + \beta\,\mathrm{ozone}_{tz} + \sum_{s}\Omega_s\,\mathrm{season}_{ts} + \sum_{m}\lambda_m\,\mathrm{DOW}_{tm} + \sum_{n}\nu_n\,\mathrm{hosp\_period}_{tn} + g\left(\gamma_1,\dots,\gamma_n;\,\mathrm{time}_t\right) + \sum_{q}\psi_q\,\mathrm{meteorology}_{tq} \qquad (1) $$

where \( Y_{kt} \) indicates whether person k had the event at time t (1 = event; 0 = no event) and t indexes the event and control days. \( V_k \) denotes the indicator variables that distinguish the case–control sets for the various individuals, x is the total number of case–control sets, and \( \zeta_k \) denotes parameters specific to the case–control sets (which are not estimated in conditional logistic regression). We defined \( \mathrm{ozone}_{tz} \) as the ozone exposure for subject k at time t in ZCTA z. Other model covariates included indicator variables for season (4 levels), day of week and holidays (DOW), and indicator variables (hosp_period) for periods of hospital participation during the study period. By design, the case-crossover approach controls for individual time-invariant confounders since case and control days are compared for the same person. We also note that the above model assumes (1) pediatric respiratory disease ED visits for different individuals are independent, conditional on the variables in the model, (2) all confounder effects are ZCTA-specific, and (3) a linear association between ambient ozone concentrations and the log odds of a pediatric respiratory disease ED visit. Using Eq. 1 (Stage 1), we estimated the log odds ratio, \( {\widehat{\beta}}_Z \), of ozone on respiratory disease for ZCTA z, and its estimated variance, \( {\widehat{V}}_Z \). Stage 1 models with fewer than 50 total ED visits per ZCTA during the study period did not converge; therefore, these ZCTAs were excluded from the second stage (Stage 2) of our modeling approach. 
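The time-stratified referent selection used in Stage 1 (control days share the ED visit's year, month, and day of week) can be sketched as:

```python
from datetime import date, timedelta

def referent_days(event: date) -> list:
    """Control days for a time-stratified case-crossover design:
    all days sharing the event day's year, month, and day of week,
    excluding the event day itself."""
    controls = []
    d = date(event.year, event.month, 1)
    while d.month == event.month:
        if d.weekday() == event.weekday() and d != event:
            controls.append(d)
        d += timedelta(days=1)
    return controls

# An ED visit on Wednesday 2006-07-12 is compared with the other
# Wednesdays of July 2006: the 5th, 19th, and 26th.
controls = referent_days(date(2006, 7, 12))
```

Because each case is compared only with itself on nearby referent days, time-invariant individual characteristics drop out of the conditional likelihood, as the text notes.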
In Stage 2, we fit two-level Bayesian hierarchical models via the R package 'TLnise' with noninformative priors [44]. Similar to a meta-regression analysis, ZCTA-specific effect estimates (log odds ratios, \( {\widehat{\beta}}_Z \)) were combined to generate city-specific estimates of the short-term association between ozone and pediatric respiratory ED visits, accounting for (1) uncertainty associated with each ZCTA-specific log odds ratio as measured by its asymptotic standard error, and (2) between-ZCTA variability of the true unobserved ZCTA-specific log odds ratio [35, 45, 46]. Specifically, we fit the following Bayesian hierarchical model in Stage 2 analyses:

$$ \begin{aligned} \widehat{\beta}_z \mid \theta_z, \widehat{V}_z &\sim N\left(\theta_z, \widehat{V}_z\right) \\ \theta_z \mid \alpha_0, \gamma, \tau^2 &\sim N\Big(\alpha_0 + \sum_j \gamma_j X_{zj},\ \tau^2\Big) \end{aligned} \qquad (2) $$

where
\( \theta_z \) = the unobserved true log odds ratio in each ZCTA;
\( X_{zj} \) = ZCTA-specific values of ZCTA-level covariates (j) in ZCTA z;
\( \alpha_0 \) = the average log odds ratio across ZCTAs;
\( \gamma_j \) = the change in the log odds ratio for a change in \( X_{zj} \);
\( \tau^2 \) = heterogeneity variance across ZCTAs of the unobserved log odds ratio, \( \theta_z \), unexplained by ZCTA-level characteristics, \( X_{zj} \).

τ reflects the between-ZCTA standard deviation and is the parameter we used to assess whether ZCTA-level characteristics explained heterogeneity in the effect of ozone on pediatric respiratory disease across ZCTAs. Modeling assumptions of the Bayesian hierarchical meta-regression include: (1) ZCTA-specific coefficients are independent and normally distributed with a common heterogeneity variance; and (2) the effect of ZCTA-level covariates on ozone-related respiratory disease is the same for each city (when pooling data from all three cities). To estimate overall associations between ozone and pediatric respiratory disease, we used Eq. 
2 to fit 'combined' meta-regressions which pooled data from all three cities (535 ZCTAs) and included indicator variables for each city, represented by Xzj in Eq. 2 [i.e. X(535 x 3) = (XAtlanta(z), XDallas(z), XSt. Louis(z)]. When estimating overall associations for each city, we do not include an intercept in the modeling equation. This fitted model is equivalent to one with an intercept and indicator variables for two cities. In secondary analyses, we used Eq. 2 to fit "city-specific" meta-regressions which pooled ZCTA-specific data from each city individually (179 ZCTAs in Atlanta; 205 ZCTAs in Dallas; and 151 ZCTAs in St. Louis). To examine modification of ozone-related respiratory disease by neighborhood SES, we further included Xzj covariates in Eq. 2 that characterized ZCTAs with respect to their SES. In these analyses, ZCTAs of extremely low SES were identified using the following SES indicators: 'undereducated area (yes/no)' [≥25% of the population aged at least 25 years with <12th grade education; 'poverty area status (yes/no)' (≥20% of households living below the federal poverty line); and 'above the 90th percentile of the NDI (yes/no)'. We also characterized ZCTAs by continuous values of SES and examined linear and non-linear effect modification through linear, quadratic, and cubic functions of neighborhood SES (indicated by continuous values of % <12th grade education, % below poverty, and the NDI). For our main effect modification analyses we fit 'combined' meta-regressions with the assumption that the effect of neighborhood SES on ozone-related respiratory disease is the same for each city. In combined models, Xzj covariates included an intercept, two indicators for city of residence (Dallas and St. Louis), and categorical or continuous ZCTA-level SES. For example, the Xzj matrix from a combined meta-regression examining effect modification by linear % below poverty was X(535 x 4) = (1, XDallas(z), XSt. 
Louis(z), X%poverty(z)), where '1' is the intercept and represents a ZCTA in Atlanta with 0% poverty. Consequently, all associations reported from combined models are interpreted as summary estimates of effect modification by neighborhood SES based on data from all three cities. To demonstrate the methods we used to fit the combined meta-regression, we have included R code and an example dataset as Additional files 2 and 3 (the example data are not real but are similar in magnitude and structure to the output of the case-crossover analyses in Stage 1). In secondary analyses, we assessed deviation from our assumption that the effect of neighborhood SES on ozone-related respiratory disease is the same for each city by fitting separate, 'city-specific' meta-regressions, which pooled ZCTA-specific data from each city individually. In doing so, the effect of neighborhood SES on ozone-related respiratory disease was estimated separately for each city. All associations between ozone and pediatric respiratory disease are reported as odds ratios (OR) with 95% posterior intervals (PI), scaled to a 25 ppb increase in ozone. Model parameter estimates were considered significant if the absolute value of the estimate divided by its posterior standard error was greater than 1.96 (analogous to a Z-score). All analyses were performed using SAS 9.4 (SAS Institute, Cary, NC) and R version 3.2.2 (R Foundation for Statistical Computing, Vienna, Austria).

Graphical representation of data

To complement our main analyses, we plotted ZCTA-specific ORs in figures and spatial maps. ZCTA-specific ORs were estimated by linear combination of Xzj model coefficients and are interpreted as the estimated "mean" OR for each ZCTA. Estimated mean ZCTA-specific ORs were plotted onto spatial maps to help identify other variables that may be spatially correlated with ZCTA-level SES.
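The Stage-2 pooling described above can be sketched with a moment-based estimator. This is an illustrative stand-in only: the paper used the TLnise estimator, whereas the sketch below uses a DerSimonian-Laird-style moment estimate of τ², and the function name, input values, and ZCTA counts are hypothetical.

```python
import numpy as np

def fit_meta_regression(beta_hat, var_hat, X):
    """Moment-based fit of the two-level model:
    beta_hat_z ~ N(theta_z, V_z),  theta_z ~ N(X_z @ gamma, tau^2).
    A simplified stand-in for the TLnise fit used in the paper."""
    n, p = X.shape
    W = np.diag(1.0 / var_hat)                        # fixed-effect weights
    gamma_fe = np.linalg.solve(X.T @ W @ X, X.T @ W @ beta_hat)
    resid = beta_hat - X @ gamma_fe
    Q = float(resid @ W @ resid)                      # heterogeneity statistic
    # DerSimonian-Laird moment estimate of tau^2 for a meta-regression
    tr_P = np.trace(W) - np.trace(np.linalg.solve(X.T @ W @ X, X.T @ W @ W @ X))
    tau2 = max(0.0, (Q - (n - p)) / tr_P)
    W2 = np.diag(1.0 / (var_hat + tau2))              # total-variance weights
    cov = np.linalg.inv(X.T @ W2 @ X)
    gamma = cov @ (X.T @ W2 @ beta_hat)
    return gamma, np.sqrt(np.diag(cov)), tau2

# Hypothetical ZCTA-level inputs: 6 ZCTAs across 3 cities, no intercept,
# one indicator per city (the 'combined' parameterization in the text)
beta_hat = np.array([0.09, 0.07, 0.05, 0.03, 0.04, 0.02])
var_hat = np.full(6, 0.0004)
city = np.repeat([0, 1, 2], 2)
X = np.zeros((6, 3)); X[np.arange(6), city] = 1.0
gamma, se, tau2 = fit_meta_regression(beta_hat, var_hat, X)
# gamma[k] is the pooled log odds ratio for city k
```

Fitting instead with an intercept plus Dallas and St. Louis indicators would reproduce the same three city-level estimates as linear combinations of coefficients, matching the equivalence of the two parameterizations noted in the text.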
It is possible that other variables may explain apparent effect modification by ZCTA-level SES, and the mapping exercise was used to help generate hypotheses regarding potential confounders. Spatial maps were generated in ArcGIS® version 10.4.1 (Environmental Systems Research Institute, Redlands, CA, USA, 2015).

Characterization of the three cities

The three study sites assessed in this analysis are large, urban cities located in three distinct US regions: the Southeast (Atlanta), Southwest (Dallas), and Midwest (St. Louis). Table 1 presents descriptive statistics for each study site, including mean temperature, number of ozone monitors, ozone concentrations, and socioeconomic composition of the population.

Table 1 Descriptive statistics of temperature, ozone concentrations, and population socioeconomic composition in each city

Daily mean temperatures during the study period were on average higher in Dallas (68.8 F) than in Atlanta (63.1 F) and St. Louis (57.9 F). On average, Atlanta and Dallas had slightly higher daily ozone concentrations across their respective study periods (42.2 and 42.0 ppb) compared to St. Louis (40.0 ppb). With regard to socioeconomic composition, Dallas had the highest mean values of % below poverty (14.0%) and % <12th grade education (17.5%) across ZCTAs, indicative of lower-SES neighborhoods, on average, in Dallas compared to Atlanta and St. Louis. Additional file 1: Figure S1 presents additional summary statistics and density distribution plots of % <12th grade education, % below poverty, and the NDI for each city. Note that NDI values were standardized to mean neighborhood deprivation in each city, hence the means of 0 and standard deviations of 1 in each city.

Pediatric respiratory ED visits

Our complete ED visit database for respiratory disease among children aged 5–18 years included 211 530 ED visits during the years 2002–2008 in Atlanta, 96 983 ED visits during the years 2006–2008 in Dallas, and 113 285 ED visits during the years 2002–2007 in St.
Louis. Due to model convergence issues in the first stage of our analysis, we excluded all ZCTAs that reported fewer than 50 ED visits over their respective study periods. This resulted in the exclusion of 12 ZCTAs in Atlanta, 48 ZCTAs in Dallas, and 105 ZCTAs in St. Louis; these ZCTAs contributed very few ED visits to our overall study, and their exclusion removed less than 2% of the total ED visits from each city. Figure 1 presents maps of the included and excluded ZCTAs of the Atlanta, Dallas, and St. Louis study areas. Table 2 summarizes differences in ED data between our complete ED database and the analytical ED database, which was restricted to data from ZCTAs with at least 50 ED visits.

Study area maps for main analyses. Gray areas represent the ZCTAs included in analyses (≥50 respiratory disease ED visits). Hash-marked areas represent excluded ZCTAs (<50 respiratory disease ED visits). a represents the Atlanta study area; b represents the Dallas study area; c represents the St. Louis study area. Abbreviations: ED, Emergency Department; ZCTA, ZIP Code Tabulation Area

Table 2 Summary of respiratory ED visits+ among 5–18 year-olds in Atlanta, Dallas, and St. Louis

Epidemiological results: association between ozone and pediatric respiratory disease

The combined meta-regression, which pooled data from all three cities (535 ZCTAs), and the city-specific meta-regressions, which pooled ZCTA-specific data from each city individually (179 ZCTAs in Atlanta; 205 ZCTAs in Dallas; and 151 ZCTAs in St. Louis), produced identical overall associations between ozone and pediatric respiratory disease. Ozone exhibited the strongest overall association with pediatric respiratory disease in Atlanta [OR = 1.08 (95% PI = 1.06, 1.11)], followed by Dallas [OR = 1.04 (95% PI = 1.01, 1.07)] and St. Louis [OR = 1.03 (95% PI = 0.99, 1.07)].
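The scaling and significance conventions stated in the Methods (ORs per 25 ppb ozone, significance when |estimate/SE| > 1.96) can be made concrete with a short sketch. The coefficient and standard error below are hypothetical, and the interval uses a normal approximation rather than a true posterior interval.

```python
import math

def or_per_25ppb(beta, se, z=1.96):
    """Scale a per-ppb log odds ratio to a 25 ppb increase in ozone.
    Returns the OR, an approximate 95% interval, and the
    |estimate/SE| > 1.96 significance rule used in the paper.
    Inputs here are illustrative, not study estimates."""
    delta = 25.0
    odds_ratio = math.exp(delta * beta)
    interval = (math.exp(delta * (beta - z * se)),
                math.exp(delta * (beta + z * se)))
    significant = abs(beta / se) > z
    return odds_ratio, interval, significant

# e.g. a per-ppb log OR of 0.00308 corresponds to an OR of ~1.08 per 25 ppb
result = or_per_25ppb(0.00308, 0.0009)
```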
Epidemiological results: effect measure modification

Categorical effect modification

Categorical ZCTA-level variables were used in Stage 2 of our modeling approach to assess effect measure modification by neighborhood SES (undereducated area, poverty area, >90th percentile of the NDI). We did not observe differences in associations between ozone and pediatric respiratory ED visits by undereducated area status when using combined or city-specific models (Fig. 2a). However, when assessing the other indicators of neighborhood SES, we observed stronger associations between ozone and pediatric respiratory ED visits in poverty areas for all cities in both the combined and city-specific meta-regressions (Fig. 2b), and stronger associations in areas designated as above the 90th percentile of the NDI, with the exception of Dallas in city-specific models (Fig. 2c). These differences in association between SES strata were not statistically significant; however, associations in low-SES groups had very wide posterior intervals, resulting from the very few ZCTAs designated as extremely low SES (Additional file 1: Table S1).

Effect modification by categorical indicators of neighborhood SES using combined and city-specific models. a: association between ozone and pediatric respiratory ED visits in undereducated areas (low SES) and non-undereducated areas (high SES). b: association between ozone and pediatric respiratory ED visits in poverty areas (low SES) and non-poverty areas (high SES). c: association between ozone and pediatric respiratory ED visits in areas above the 90th percentile of the NDI (low SES) and in areas below the 90th percentile (higher SES). Odds ratios and 95% posterior intervals per 25 ppb ozone are presented. Black points and error bars represent ORs and 95% PIs in low-SES areas; gray points and bars represent ORs and 95% PIs in areas of higher SES. Undereducated areas: ≥25% of the adult population (≥25 years old) with less than a 12th grade education.
Poverty areas: ≥20% of households living below the Federal Poverty Line. Abbreviations: ED, Emergency Department; NDI, Neighborhood Deprivation Index; SES, socioeconomic status; ZCTA, ZIP Code Tabulation Area

Linear and non-linear effect modification

For each city, linear and non-linear effect modification by neighborhood SES was evaluated through the use of linear, quadratic, and cubic functions of % <12th grade education, % below poverty, and the NDI. We present results from combined and city-specific models for estimated ORs across the entire range of neighborhood SES values in each city; interpretations of these results were based on estimated ORs for SES values falling between the 2.5th and 97.5th percentiles of neighborhood SES, due to data sparseness at the extremes of the SES distributions outside these bounds. In combined models, estimated ORs tended to increase with decreasing SES, regardless of the continuous function specified in models (linear, quadratic, cubic); this pattern was observed across all SES indicators and in each city (Fig. 3). In Atlanta, robust associations between ozone and pediatric respiratory disease were observed regardless of the socioeconomic environment in which children live. In Dallas and St. Louis, significantly positive estimated ORs were observed only in areas characterized as low to very low SES (i.e. above approximately 16% below poverty in Dallas and 20% below poverty in St. Louis). However, in many models specified with quadratic or cubic functions of SES, we also observed a decrease in the magnitude of estimated ORs at the lowest extremes of the SES distribution (Fig. 3).

Associations between ozone and pediatric respiratory ED visits by continuous neighborhood SES. Combined meta-regressions were used to examine effect modification of the association between ozone and pediatric respiratory disease by neighborhood SES.
Linear, quadratic, and cubic functions of % <12th grade education (a), % below poverty (b), and the NDI (c) were included in combined meta-regressions to examine linear and non-linear effect modification. Solid black lines represent estimated ORs between ozone and pediatric respiratory disease ED visits by ZCTA-specific values of neighborhood SES. Gray polygons represent 95% PIs of the estimated ORs. Histograms below each plot represent the distribution of ZCTA-specific SES values in each city. Dotted black lines represent the 2.5th and 97.5th percentile values of neighborhood SES in each city. The y-axis scale on the right side of each graph represents the frequency count of ZCTAs. Abbreviations: ED, Emergency Department; NDI, Neighborhood Deprivation Index; OR, odds ratio; PI, posterior interval; SES, socioeconomic status; ZCTA, ZIP Code Tabulation Area. Plots adapted from Gasparrini et al., 2015 [53]. R code for plots available at https://github.com/gasparrini/2015_gasparrini_Lancet_Rcodedata [54]

In combined models, we found no evidence of linear effect modification by neighborhood SES, but we found some evidence of non-linear effect modification. Specifically, the parameter estimate for the cubic function of the NDI was nearly significant at the 0.05 level (P = 0.052, 2-tailed), and the estimated mean ORs varied across NDI levels in a non-linear manner (Fig. 3). Note that in combined models, the relative similarity across cities in linear and non-linear patterns of effect modification reflects the underlying assumption that the effect of neighborhood SES on ozone-related respiratory disease is the same in each city. To assess deviation from this assumption, we also fit city-specific models (Additional file 1: Figure S2). In city-specific analyses, patterns of estimated ORs generally reflected those of the combined models; however, some qualitative differences were observed.
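The estimated mean OR curves described above come from a linear combination of meta-regression coefficients evaluated at each ZCTA's SES value, scaled to 25 ppb. The following sketch shows that computation for a cubic specification; all coefficient values and the percentile bounds are hypothetical, chosen only to illustrate the mechanics.

```python
import numpy as np

def estimated_mean_or(alpha0, gammas, s, delta=25.0):
    """Estimated mean OR per `delta` ppb ozone for a ZCTA with continuous
    SES value s, from a cubic meta-regression:
    theta(s) = alpha0 + g1*s + g2*s^2 + g3*s^3.
    Coefficients here are hypothetical, not study estimates."""
    s = np.asarray(s, dtype=float)
    theta = alpha0 + gammas[0] * s + gammas[1] * s**2 + gammas[2] * s**3
    return np.exp(delta * theta)

# Evaluate across a grid of % below poverty, restricted (as in the paper)
# to a central percentile range of the SES distribution
s_grid = np.linspace(2.0, 40.0, 100)      # hypothetical 2.5th-97.5th bounds
or_curve = estimated_mean_or(0.0010, (1.0e-4, -2.0e-6, 1.0e-8), s_grid)
```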
The differences between combined and city-specific models were primarily observed when comparing the shapes of the non-linear curves from models fit with quadratic functions of neighborhood SES. For example, when combined and city-specific models were fit with quadratic functions of neighborhood SES, estimated ORs in Dallas followed an inverted U-shape across levels of SES that was not observed in the other cities; this pattern was much more pronounced in city-specific models than in the combined model (Additional file 1: Figure S2). Although our assessment suggested effect modification by neighborhood SES, inclusion of neighborhood SES in both combined and city-specific models did not substantively explain variability in the unobserved true effect of ozone across ZCTAs, as measured by the between-ZCTA heterogeneity parameter, τ (results not shown); these findings imply unexplained heterogeneity across ZCTAs and warrant further inquiry.

Spatial mapping and risk visualization

Spatial mapping, in the context of this study, was used to generate hypotheses about spatial influences and to assess potential confounding of apparent effect modification by neighborhood SES. To visually and qualitatively explore spatial patterning, we transferred estimated mean ZCTA-specific ORs from combined models that included cubic functions of the NDI (Fig. 3c) onto spatial maps (Fig. 4). The spatial maps presented in Fig. 4 reveal possible spatial patterning of the ORs, and this mapping exercise allowed us to qualitatively assess commonalities among cities and consider possible alternative modifiers of ozone-related respiratory morbidity. For example, ORs appear stronger in areas clustered near urban centers and along major roadways, suggesting common areas of concern in each city.
Based on these observations, in secondary analyses we tested whether ZCTAs that contained an interstate highway had significantly stronger associations between ozone and respiratory disease; however, we did not find evidence of effect modification by interstate-highway status (results not shown). Given how we estimated the spatial distribution of ambient ozone (using a regional transport model to interpolate between observations at regulatory ambient monitoring sites), we were limited in our ability to detect effect modification associated with proximity to major roadways.

Spatial representation of estimated mean ORs accounting for ZCTA-specific NDI values in each city. In Fig. 4, average ORs between ozone and respiratory disease accounting for ZCTA-specific NDI values were estimated for each ZCTA in Atlanta (a), Dallas (b), and St. Louis (c) using a combined model that included a cubic function of the NDI. Abbreviations: NDI, Neighborhood Deprivation Index; OR, odds ratio; SES, socioeconomic status; ZCTA, ZIP Code Tabulation Area

From our mapping exercise we also observed distinct patterns of clustering in each city (e.g. a cluster of high ORs in southwest St. Louis) that may be influencing patterns of effect modification; these differences may be related to patterns of urban development and socio-demographic clustering unique to each city, and future analyses could consider performing cluster analyses.

Discussion

In this study, we assessed the short-term effects of ozone on respiratory ED visits among children in three US cities. We used a 2-stage Bayesian hierarchical approach to examine modification by neighborhood SES, and we used information from three cities to improve the representativeness of our results.
Our methodology is similar to previous work in this field but extends it in two key ways: (1) we specifically focused our meta-regression on ozone-related respiratory disease in the pediatric population, a subpopulation with known sensitivities; and (2) by pooling effects at the ZCTA level (instead of the city or county level, as is commonly done [34–38]), we were able to quantitatively and qualitatively (through spatial mapping) assess socioeconomic influences at a finer spatial resolution than was done previously. Our findings add new insights, and new questions, to the burgeoning knowledge base on neighborhood socioeconomic modifiers of air pollution health effects. In overall analyses, we observed statistically significant associations between 3-day average concentrations of ozone and pediatric respiratory disease in Atlanta and Dallas. Associations were non-significant in St. Louis but were similar in magnitude to the observed associations in Dallas. These results and their respective magnitudes of association are in line with our previous findings from these cities [18, 29, 33, 40] and with work by others on ozone-related respiratory disease [14, 19, 47, 48]. A primary objective of our study was to examine effect modification by neighborhood SES in each city and to evaluate whether patterns of effect modification differed by city. We primarily assessed effect modification through the use of combined meta-regressions that pooled information across ZCTAs in our three cities. By combining information from all ZCTAs, we were able to more generally assess the presence of linear and non-linear effect modification across study areas. Another advantage of the combined-model approach was greater power to detect effect modification versus city-specific models, which had fewer ZCTAs contributing data; however, combined models forced the effect of neighborhood SES on ozone-related respiratory disease to be uniform across all cities.
Because neighborhood SES may represent a confluence of extrinsic vulnerability factors, and because these factors may differ by city, this is a strong assumption; we therefore also fit city-specific models to assess it. Comparison of results from combined and city-specific models did not yield substantially different interpretations; in fact, patterns of effect modification were largely similar across cities, and the observed differences could have been due to limited power in city-specific models as well as the observed sensitivity of the city-specific models to sparse data at extreme values of neighborhood SES. Therefore, results from combined meta-regressions were used to facilitate interpretations. In each city, results from combined meta-regressions fit with categorical SES indicators suggested stronger associations between ozone and pediatric respiratory disease in neighborhoods characterized as poverty areas and in neighborhoods above the 90th percentile value of the NDI. However, differences between groups were not statistically significant due to wide posterior intervals. Similar patterns were found in Atlanta and St. Louis in previous studies that examined neighborhood SES as a modifier of associations between air pollution and pediatric asthma [18, 27, 49]. When using undereducated area (yes/no) to indicate SES, we did not observe differences between strata, suggesting that the observed effect modification depended on the way in which neighborhood SES was measured. In combined meta-regressions fit with continuous values of neighborhood SES, we found some evidence of non-linear patterns of effect modification across levels of SES, particularly for the NDI; overall, these results reflected those observed with categorical indicators of SES in that ORs tended to increase with decreasing neighborhood SES.
Our investigation of modification by continuous SES also yielded the following key observations: (1) we observed robust associations between ozone and pediatric respiratory disease in Atlanta regardless of the socioeconomic environment in which children live (i.e. nearly all ZCTA-specific ORs were significantly positive between the 2.5th and 97.5th percentiles of neighborhood SES), whereas in both Dallas and St. Louis, significantly positive associations between ozone and pediatric respiratory disease were observed only in areas characterized as low to very low SES (i.e. between the 75th and 95th percentiles of neighborhood SES); and (2) in some analyses we observed weak associations in the lowest-SES neighborhoods [i.e. neighborhoods at or above the 95th percentile of % below poverty (the extreme right tail of the SES distribution)]. Non-linear effect modification by continuous neighborhood SES has not been examined previously, and findings from this study add to the knowledge base on neighborhood SES as a modifier of air pollution-respiratory disease associations among children. While stronger associations between ozone and respiratory disease have been consistently observed in children compared to adults [2, 14, 33], the evidence on extrinsic factors (e.g. low socioeconomic status) and their potential to modify ozone-health associations is limited. A recent systematic review by Vinikoor-Imler et al. designates the weight of evidence regarding neighborhood SES as a modifier as suggestive only, citing "inconsistencies within a discipline" or "lack of coherence across disciplines" as reasons for not being able to make more definitive inferences [2]. Our results suggest potential non-linearity in effect modification, different patterns of effect modification depending on the choice of neighborhood SES indicator, and possible spatial patterning of risk.
The non-linear patterns and different findings with different SES indicators may account for some of the inconsistencies observed in the studies reviewed by Vinikoor-Imler et al. Our results also raise additional questions worthy of investigation. For example, why are mean ZCTA-specific ORs weak in the lowest-SES neighborhoods? These observations stand in stark contrast to our intuition and belief that children from impoverished neighborhoods would be more vulnerable to the respiratory effects of ozone than children living in wealthy neighborhoods. Our study is not designed to answer this question directly, but one possible reason for this observation may be that children living in wealthier neighborhoods have few component causes of air pollution health effects; therefore, ozone has a substantial relative influence (i.e. a large piece of the 'causal pie') on air pollution-health associations [50]. Children living in lower-SES neighborhoods, in contrast, may have a multitude of exposures that could exacerbate respiratory disease, of which ozone is only one (i.e. exposure to ozone constitutes a small piece of the 'causal pie'). Another plausible reason for having observed weaker associations in low-SES populations may be our use of multiplicative models and the mathematical scale of effect measures. While multiplicative models are used in the vast majority of air pollution health studies [3, 51], the true nature of the effect of ozone on ED visits may be additive. In our own data, we observed a marked increase in ED rates from high SES to low SES in each city and for each SES indicator (Fig. 5). Assuming additive effects, low baseline risk could explain stronger relative effects of ozone in the highest-SES populations and apparently weaker relative effects in the lowest-SES populations [10, 27]. However, in many analyses we observed strong, positive associations in low-SES areas, which may reflect supra-additive effects of SES and ozone [27].
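A toy calculation illustrates the baseline-risk argument above; the numbers are invented, not from the study. An identical additive increase in ED visit rates implies a larger relative effect where baseline risk is low (a wealthier neighborhood) than where baseline risk is high.

```python
# Hypothetical annual ED visit rates per 1000 children
base_high_ses = 1.0    # high-SES ZCTA baseline rate (hypothetical)
base_low_ses = 5.0     # low-SES ZCTA baseline rate (hypothetical)
additive_effect = 0.5  # same ozone-attributable absolute increase in both

# Relative risks implied by the same additive effect
rr_high = (base_high_ses + additive_effect) / base_high_ses  # 1.5
rr_low = (base_low_ses + additive_effect) / base_low_ses     # 1.1
```

Equal additive effects thus translate into unequal multiplicative effects, so models fit on the multiplicative (odds ratio) scale can make the low-SES association look weaker even when the absolute burden there is larger.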
While there are methods for estimating additive interaction based on the results of multiplicative models (e.g. the Relative Excess Risk due to Interaction (RERI) and the Synergy Index), these methods cannot be straightforwardly applied to our models, and the validity of applying them to models with multiple covariates and a continuous exposure is uncertain.

Annual mean ED visit rates by neighborhood SES for each ZCTA in each city. Respiratory disease ED rates are reported per 1000 children (5–18 years old) and were calculated for each ZCTA by dividing the annual total number of respiratory disease ED visits by annual estimates of the 5–18 year-old population for each year in the study period. Annual ED rates were then averaged over the study period of each city. ED visit rates for each ZCTA are represented by the "+" symbol and shown in a by the percentage (%) of the adult population (≥25 years old) with less than a 12th grade education (% <12th grade), in b by the % of households living below the federal poverty line (% below poverty), and in c by the Neighborhood Deprivation Index (NDI). The solid black line represents a local polynomial regression using weighted least squares to fit a line through the data. The dotted gray lines represent the 1st, 2nd, and 3rd quartile values of each SES indicator. In each panel and city, neighborhood SES decreases from left to right. Abbreviations: ED, Emergency Department; NDI, Neighborhood Deprivation Index; RDAS, respiratory disease ED visits; SES, socioeconomic status; ZCTA, ZIP Code Tabulation Area

Another potential factor influencing the observed associations is complex spatial patterning of respiratory disease risk and socioeconomic status. Our modeling approach enabled us to qualitatively assess similarities and differences in the spatial patterning of ozone-health associations across cities by transferring estimated ZCTA-specific ORs onto a spatial canvas to visualize locations of low- and high-risk areas.
Findings from this qualitative assessment show that spatial influences are apparent in each city. The observed clustering of health risk and spatial patterning unique to each city may partially account for the patterns of effect modification observed. Future studies could use a similar mapping approach with cluster analysis to assess the degree to which urban development and socio-demographic clustering influence air pollution health effects. In our study, inclusion of neighborhood SES in models did not explain heterogeneity in ozone-related pediatric respiratory disease across ZCTAs. There are several limitations that could have contributed to this observation. First, by assessing neighborhood SES effects at the ZCTA level, we assumed that ZCTA boundaries are relevant socioeconomic environments with regard to air pollution vulnerability. However, previous studies using similar methods have assessed only city- or county-level effects [34–38]; given that neighborhood SES often varies over smaller spatial scales than counties, our approach, which assessed neighborhood effects at the ZCTA level, is an improvement over these previous studies. Second, we used neighborhood SES values that were averaged across the study periods to evaluate effect modification of ozone-health associations. While these averages accounted for any shifts in socioeconomic composition that may have occurred during the respective study periods of our three cities, their use in epidemiologic analyses assumed that the SES of each ZCTA was constant. Due to Dallas' relatively short study period, we expect this type of misclassification to be less of an issue for Dallas than for Atlanta or St. Louis. Third, in our case-crossover models (Stage 1 analyses) we did not control for other pollutants known to influence respiratory outcomes (e.g. nitrogen dioxide and fine particulate matter). Therefore, our estimated ORs for ozone could include some effects of correlated pollutants.
Our decision to examine health associations only with exposure to ozone was based on the fact that ozone is a spatially homogeneous pollutant. In the multi-city context, we were concerned that exposure measurement error might differ in each city due to spatial variation of pollutants within cities; by examining only associations with ozone, we hoped to minimize the effect of such differential exposure measurement error. Nevertheless, we recognize that the associations between ozone and other pollutants could also differ across cities. Finally, although we have large numbers of daily ED visits within each city, power to detect effect modification by socioeconomic factors may have been limited. It is well established that ozone is a potent oxidizer and highly toxic to the epithelial cells of the entire respiratory tract. In toxicological studies, acute exposures to ozone induce transient physiological and biochemical changes, while chronic exposures lead to cumulative damage or permanent decreases in airway function [52]. Continued efforts to better identify individual- and population-level vulnerabilities, while producing generalizable findings, are imperative. Our findings suggest that neighborhood-level SES is a factor contributing to short-term vulnerability to ozone-related pediatric respiratory morbidity in Atlanta, Dallas, and St. Louis. While nuanced relationships between neighborhood SES and ozone-respiratory health were observed in each city, the overall findings were largely generalizable. Synthesizing our results from combined meta-regressions and taking into account the high baseline risk in low-SES populations (Fig. 5), we conclude that children living in low-SES environments in Atlanta, Dallas, and St. Louis suffer a higher burden of respiratory disease due to ozone than their counterparts living in wealthier neighborhoods.
Abbreviations

CMAQ: Community Multi-scale Air Quality
DOW:
ED: Emergency Department
ICD-9: International Classification of Diseases, 9th Revision
NDI: Neighborhood Deprivation Index
PI: Posterior interval
SES: Socioeconomic status
ZCTA: ZIP Code Tabulation Area

References

U.S. Environmental Protection Agency. Final Report: Integrated Science Assessment of Ozone and Related Photochemical Oxidants. Washington, DC: U.S. Environmental Protection Agency; 2013. Publication no. EPA/600/R-10/076F.
Vinikoor-Imler LC, Owens EO, Nichols JL, Ross M, Brown JS, Sacks JD. Evaluating potential response-modifying factors for associations between ozone and health outcomes: a weight-of-evidence approach. Environ Health Perspect. 2014;122(11):1166–76.
Bell ML, Zanobetti A, Dominici F. Who is more affected by ozone pollution? A systematic review and meta-analysis. Am J Epidemiol. 2014;180(1):15–28.
Bateson TF, Schwartz J. Children's response to air pollutants. J Toxicol Environ Health A. 2007;71(3):238–43.
Makri A, Stilianakis NI. Vulnerability to air pollution health effects. Int J Hyg Environ Health. 2008;211(3–4):326–36.
Klepeis NE, Nelson WC, Ott WR, Robinson JP, Tsang AM, Switzer P, Behar JV, Hern SC, Engelmann WH. The National Human Activity Pattern Survey (NHAPS): a resource for assessing exposure to environmental pollutants. J Expo Anal Environ Epidemiol. 2001;11(3):231–52.
Adler NE, Newman K. Socioeconomic disparities in health: pathways and policies. Health Aff. 2002;21(2):60–76.
Bernard P, Charafeddine R, Frohlich KL, Daniel M, Kestens Y, Potvin L. Health inequalities and place: a theoretical conception of neighbourhood. Soc Sci Med. 2007;65(9):1839–52.
Krieger N, Williams DR, Moss NE. Measuring social class in US public health research: concepts, methodologies, and guidelines. Annu Rev Public Health. 1997;18:341–78.
Burra TA, Moineddin R, Agha MM, Glazier RH. Social disadvantage, air pollution, and asthma physician visits in Toronto, Canada. Environ Res. 2009;109(5):567–74.
Delfino RJ, Chang J, Wu J, Ren C, Tjoa T, Nickerson B, Cooper D, Gillen DL. Repeated hospital encounters for asthma in children and exposure to traffic-related air pollution near the home. Ann Allergy Asthma Immunol. 2009;102(2):138–44.
Laurent O, Pedrono G, Segala C, Filleul L, Havard S, Deguen S, Schillinger C, Riviere E, Bard D. Air pollution, asthma attacks, and socioeconomic deprivation: a small-area case-crossover study. Am J Epidemiol. 2008;168(1):58–65.
Lin M, Chen Y, Villeneuve PJ, Burnett RT, Lemyre L, Hertzman C, McGrail KM, Krewski D. Gaseous air pollutants and asthma hospitalization of children with low household income in Vancouver, British Columbia, Canada. Am J Epidemiol. 2004;159(3):294–303.
Sacks JD, Rappold AG, Davis Jr JA, Richardson DB, Waller AE, Luben TJ. Influence of urbanicity and county characteristics on the association between ozone and asthma emergency department visits in North Carolina. Environ Health Perspect. 2014;122(5):506–12.
Yap P-S, Gilbreath S, Garcia C, Jareen N, Goodrich B. The influence of socioeconomic markers on the association between fine particulate matter and hospital admissions for respiratory conditions among children. Am J Public Health. 2013;103(4):695–702.
Laurent O, Pedrono G, Filleul L, Segala C, Lefranc A, Schillinger C, Riviere E, Bard D. Influence of socioeconomic deprivation on the relation between air pollution and beta-agonist sales for asthma. Chest. 2009;135(3):717–23.
Norris G, YoungPong SN, Koenig JQ, Larson TV, Sheppard L, Stout JW. An association between fine particles and asthma emergency department visits for children in Seattle. Environ Health Perspect. 1999;107(6):489–93.
Winquist A, Klein M, Tolbert P, Flanders WD, Hess J, Sarnat SE. Comparison of emergency department and hospital admissions data for air pollution time-series studies. Environ Health. 2012;11:70.
Yang Q, Chen Y, Shi Y, Burnett RT, McGrail KM, Krewski D.
Association between ozone and respiratory admissions among children and the elderly in Vancouver, Canada. Inhal Toxicol. 2003;15(13):1297–308. Kim SY, O'Neill MS, Lee JT, Cho Y, Kim J, Kim H. Air pollution, socioeconomic position, and emergency hospital visits for asthma in Seoul, Korea. Int Arch Occup Environ Health. 2007;80(8):701–10. Lee JT, Son JY, Kim H, Kim SY. Effect of air pollution on asthma-related hospital admissions for children by socioeconomic status associated with area of residence. Arch Environ Occup Health. 2006;61(3):123–30. Neidell MJ. Air pollution, health, and socio-economic status: the effect of outdoor air quality on childhood asthma. J Health Econ. 2004;23(6):1209–36. Sarnat SE, Sarnat JA, Mulholland J, Isakov V, Ozkaynak H, Chang HH, Klein M, Tolbert PE. Application of alternative spatiotemporal metrics of ambient air pollution exposure in a time-series epidemiological study in Atlanta. J Expo Sci Environ Epidemiol. 2013;23(6):593–605. Shmool JL, Kubzansky LD, Newman OD, Spengler J, Shepard P, Clougherty JE. Social stressors and air pollution across New York City communities: a spatial approach for assessing correlations among multiple exposures. Environ Health. 2014;13:91. Wilhelm M, Qian L, Ritz B. Outdoor air pollution, family and neighborhood environment, and asthma in LA FANS children. Health Place. 2009;15(1):25–36. Lin S, Bell EM, Liu W, Walker RJ, Kim NK, Hwang SA. Ambient ozone concentration and hospital admissions due to childhood respiratory diseases in New York State, 1991–2001. Environ Res 2008 O'Lenick CR, Winquist A, Mulholland JA, Friberg MD, Chang HH, Kramer MR, Darrow LA, Sarnat SE. Assessment of neighbourhood-level socioeconomic status as a modifier of air pollution–asthma associations among children in Atlanta. J Epidemiol Community Health. 2017;71(2):129–36. Strickland MJ, Darrow LA, Klein M, Flanders WD, Sarnat JA, Waller LA, Sarnat SE, Mulholland JA, Tolbert PE. 
Short-term associations between ambient air pollutants and pediatric asthma emergency department visits. Am J Respir Crit Care Med. 2010;182(3):307–16. Tolbert PE, Mulholland JA, MacIntosh DL, Xu F, Daniels D, Devine OJ, Carlin BP, Klein M, Dorley J, Butler AJ, et al. Air quality and pediatric emergency room visits for asthma in Atlanta, Georgia. USA Am J Epidemiol. 2000;151(8):798–810. Strickland MJ, Klein M, Flanders WD, Chang HH, Mulholland JA, Tolbert PE, Darrow LA. Modification of the effect of ambient air pollution on pediatric asthma emergency visits: susceptible subpopulations. Epidemiology. 2014;25(6):843–50. Peel JL, Tolbert PE, Klein M, Metzger KB, Flanders WD, Todd K, Mulholland JA, Ryan PB, Frumkin H. Ambient air pollution and respiratory emergency department visits. Epidemiology. 2005;16(2):164–74. Darrow LA, Klein M, Flanders WD, Mulholland JA, Tolbert PE, Strickland MJ. Air pollution and acute respiratory infections among children 0–4 years of Age: an 18-year time-series study. Am J Epidemiol. 2014;180(10):968–77. Alhanti BA, Chang HH, Winquist A, Mulholland JA, Darrow LA, Sarnat SE. Ambient air pollution and emergency department visits for asthma: a multi-city assessment of effect modification by age. J Expo Sci Environ Epidemiol. 2016;26(2):180–8. Bell ML, Dominici F. Effect modification by community characteristics on the short-term effects of ozone exposure and mortality in 98 US communities. Am J Epidemiol. 2008;167(8):986–97. Dominici F, Samet JM, Zeger SL. Combining evidence on air pollution and daily mortality from the 20 largest US cities: a hierarchical modelling strategy. J R Stat Soc A Stat Soc. 2000;163(3):263–302. Levy JI, Diez D, Dou Y, Barr CD, Dominici F. A meta-analysis and multisite time-series analysis of the differential toxicity of major fine particulate matter constituents. Am J Epidemiol. 2012;175(11):1091–9. Chen R, Kan H, Chen B, Huang W, Bai Z, Song G, Pan G, Group CC. 
Association of particulate air pollution with daily mortality: the China Air Pollution and Health Effects Study. Am J Epidemiol. 2012;175(11):1173–81. Peng RD, Bell ML, Geyh AS, McDermott A, Zeger SL, Samet JM, Dominici F. Emergency admissions for cardiovascular and respiratory diseases and the chemical composition of fine particle air pollution. Environ Health Perspect. 2009;117(6):957–63. Sarnat SE, Winquist A, Schauer JJ, Turner JR, Sarnat JA. Fine Particulate Matter Components and Emergency Department Visits for Cardiovascular and Respiratory Diseases in the St. Louis, Missouri-Illinois, Metropolitan Area. Environ Health Perspect. 2015;123:437–44. Winquist A, Kirrane E, Klein M, Strickland M, Darrow LA, Sarnat SE, Gass K, Mulholland J, Russell A, Tolbert P. Joint effects of ambient Air pollutants on pediatric asthma emergency department visits in Atlanta, 1998–2004. Epidemiology. 2014;25(5):666–73. Messer LC, Laraia BA, Kaufman JS, Eyster J, Holzman C, Culhane J, Elo I, Burke JG, O'Campo P. The development of a standardized neighborhood deprivation index. J Urban Health. 2006;83(6):1041–62. Friberg MD, Zhai X, Holmes HA, Chang HH, Strickland MJ, Sarnat SE, Tolbert PE, Russell AG, Mulholland JA. Method for fusing observational data and chemical transport model simulations to estimate spatiotemporally resolved ambient Air pollution. Environ Sci Technol. 2016;50(7):3695–705. Strickland MJ, Darrow LA, Mulholland JA, Klein M, Flanders WD, Winquist A, Tolbert PE. Implications of different approaches for characterizing ambient air pollutant concentrations within the urban airshed for time-series studies and health benefits analyses. Environ Health. 2011;10:36. Everson PJ, Morris CN. Inference for multivariate normal hierarchical models. J R Stat Soc Ser B (Stat Methodol). 2000;62(2):399–412. Peng RD, Chang HH, Bell ML, et al. Coarse particulate matter air pollution and hospital admissions for cardiovascular and respiratory diseases among medicare patients. JAMA. 
2008;299(18):2172–9. Dominici F, Peng RD, Bell ML, Pham L, McDermott A, Zeger SL, Samet JM. Fine particulate air pollution and hospital admission for cardiovascular and respiratory diseases. JAMA 2006; 295(10):popost1127-1134. Villeneuve PJ, Chen L, Rowe BH, Coates F. Outdoor air pollution and emergency department visits for asthma among children and adults: a case-crossover study in northern Alberta, Canada. Environ Health. 2007;6:40. Choi M, Curriero FC, Johantgen M, Mills ME, Sattler B, Lipscomb J. Association between ozone and emergency department visits: an ecological study. Int J Environ Health Res. 2011;21(3):201–21. Sarnat SE, Klein M, Sarnat JA, Flanders WD, Waller LA, Mulholland JA, Russell AG, Tolbert PE. An examination of exposure measurement error from air pollutant spatial variability in time-series studies. J Expo Sci Environ Epidemiol. 2010;20(2):135–46. Rothman KJ. CAUSES. Am J Epidemiol. 1976;104(6):587–92. Bell ML, Zanobetti A, Dominici F. Evidence on vulnerability and susceptibility to health risks associated with short-term exposure to particulate matter: a systematic review and meta-analysis. Am J Epidemiol. 2013;178(6):865–76. Curtis D. Klaassen, Louis J. Casarett, Doull J. Casarett and Doull's toxicology : the basic science of poisons, 8th edn: New York : McGraw-Hill Education, c2013. Gasparrini A, Guo Y, Hashizume M, Lavigne E, Zanobetti A, Schwartz J, Tobias A, Tong S, Rocklöv J, Forsberg B, et al. Mortality risk attributable to high and low ambient temperature: a multicountry observational study. Lancet. 2015;386(9991):369–75. Gasparrini A. Supplementary data and R-code for "Mortality risk attributable to high and low ambient temperature: a multicountry observational study". 2015. [https://github.com/gasparrini/2015_gasparrini_Lancet_Rcodedata]. Accessed 3 Feb 2016. 
A New Method to Calculate Water Film Stiffness and Damping for Water Lubricated Bearing with Multiple Axial Grooves

Chinese Journal of Mechanical Engineering, Issue 1/2020 | 01.12.2020 | Original Article | Open Access

Guojun Ren

Nomenclature

\(A_{L} (\eta ),\;A_{E} (\eta )\;or\;A_{P} (\eta )\): Non-dimensional location of the static load center
\(A_{Ld} (\eta ),\;A_{Ed} (\eta )\;or\;A_{Pd} (\eta )\): Non-dimensional location of the dynamic load center
\(B\): Width of the slide bearing and of the pad between two grooves (m)
\(c\): Radial bearing clearance (m)
\(D\): Shaft diameter (m)
\(e\): Eccentricity of the bearing (m)
\(h_{L0}\): Lubrication film thickness at the leading edge of the slide bearing under steady operation (m)
\(h_{T0}\): Lubrication film thickness at the trailing edge of the slide bearing under steady operation (m)
\(\Delta h_{Ti} (t)\): Amount of dynamic squeeze of the fluid film at the trailing edge of pad "i" (m)
\(\Delta \dot{h}_{Ti} (t)\): Dynamic squeeze velocity of the fluid film at the trailing edge of pad "i" (m/s)
\(L\): Bearing length, i.e., the length of a bearing pad (m)
\(x^{ * }\): Non-dimensional coordinate of the sliding bearing, \(x^{ * } = x/B\)
\(N\): Number of bearing grooves, always an even number without loss of generality
\(N_{s}\): Shaft rotating speed (r/s)
\(V\): Surface velocity of the shaft (m/s)
\(W_{o}\): Load on the slide bearing under steady operating conditions (N)
\(W_{o,i}\): \(W_{o}\) referring to pad "i"
\(W_{1}\): Dynamic part of the bearing load on top of \(W_{o}\) (N)
\(\alpha_{Li}\): Location angle of the leading edge of pad "i"
\(\alpha_{Ti}\): Location angle of the trailing edge of pad "i"
\(\varepsilon\): Eccentricity ratio of the entire bearing (ε = e/c)
\(\varPhi\): Attitude angle of the bearing (rad)
\(\eta = h_{L0} /h_{T0}\): Ratio of the film thickness at the leading edge to that at the trailing edge
\(\eta_{i}\): \(\eta\) referring to pad "i"
\(\lambda\): Part of the first pad surface taking load (0 to 1.0)
\(\mu\): Viscosity of the lubricant; for water it is a constant (Pa·s)
\(\omega\): Angular velocity of the shaft (1/s)
\(K_{yy} ,K_{yx} ,K_{xy} ,\;K_{xx}\): Non-dimensional coefficients of stiffness
Dimensional coefficients of stiffness
\(C_{yy} ,C_{yx} ,C_{xy} ,\;C_{xx}\): Non-dimensional coefficients of damping
Dimensional coefficients of damping

1 Introduction

As is well known, water-lubricated guide bearings for hydro turbines and pumps are conventionally designed with multiple axial grooves. These grooves are provided to cool the bearing effectively and to flush away abrasives. However, because groove designs vary in number and size, predicting bearing performance in terms of load capacity, stiffness and damping characteristics is very difficult. The author of this paper [ 1 ] introduced an analytical method to investigate the groove effect on the Sommerfeld Number and on the coefficients of stiffness and damping, based on inclined slide bearing solutions for bearings with a rigid surface. However, the quality and accuracy of the solution depend on how closely the geometry of the inclined slide bearing represents the actual wedge shape at the individual pads of the grooved bearing, especially the wedge shape of the pads that carry most of the load. This paper examines three different geometric shapes of inclined slide bearings and provides a solution of satisfactory accuracy. A brief review of the available literature is useful to understand the development of this method.
For rigid-surface plain bearings with no grooves, in terms of steady operation, besides the classic long bearing theory of Sommerfeld [ 2 ] and the short bearing theory of DuBois and Ocvirk [ 3 ], there are several excellent analytical solutions for finite length bearings. The finite length bearing theory by Childs et al. [ 4 , 5 ] is one of them. Another good analytical solution was proposed by Capone et al. [ 6 ]. Numerical solutions using finite difference and finite element methods are abundant; their evaluation is not the focus of this paper. For bearings designed with multiple axial grooves, Pai et al. [ 7 ] published a number of works on the steady performance and dynamic stability of a simple rotor. Ren [ 8 ] published a paper on the calculation of the water film thickness of water-lubricated bearings with multiple axial grooves in steady-state operation. On the stiffness and damping coefficients of non-grooved plain bearings, the classical short bearing solution is the most popular one; the solution by Childs et al. [ 4 , 5 ] is strongly recommended for finite length bearings. It should be mentioned that the above works relate to rigid-surface bearings. For deformable-surface bearings, the effect of surface deformation must be considered: Lahmar et al. [ 9 ] provide a procedure to evaluate both static and dynamic performance simultaneously with a small perturbation method. Recent developments have focused on CFD and FSI (fluid-structure interaction) [ 10 – 17 ]. The effect of turbulence on stiffness and damping was investigated in Refs. [ 18 , 19 ]. In reviewing the available information, methods to determine the stiffness and damping of bearings with multiple axial grooves rely mainly on numerical simulation.
The objective of this paper is to provide a semi-analytical method to investigate the groove effect on load capacity, stiffness and damping based on infinite-length and rigid-surface assumptions, which are approximately valid for bearings made of hard polymers under relatively low bearing pressure [ 10 ]. For the calculation results to be useful, certain conditions must apply. First, the ratio of bearing length to pad width must be 3.0 or higher. Second, the bearing pressure must be relatively low, so that the surface deformation effect does not overwhelmingly change the result. Although this paper does not include the effect of elastic deformation of the bearing surface in terms of elastohydrodynamic lubrication, the results are approximately valid for polymer bearings of higher hardness under lower pressure, which is the case for most pump and turbine guide bearings. The results are not applicable to water-lubricated bearings with rubber staves, which need special treatment either through experiment [ 20 – 29 ] or numerical analysis. Experimental studies on bearings with multiple axial grooves demonstrate that a relatively rigid pad surface forms hydrodynamic pressure more easily than a soft surface [ 22 ]; this has also been demonstrated in the elastohydrodynamic study of sliding bearings [ 30 ]. A practical engineering application of stiffness and damping was shown in Ref. [ 31 ].

2 Stiffness and Damping of Inclined Slide Bearings with Different Geometries

The idea behind evaluating the load capacity (Sommerfeld Number) and the stiffness and damping coefficients of a circular bearing with multiple axial grooves is that the circular journal bearing can be considered as an assembly of many simple inclined slide bearings (Figure 1), so that the analytical results for an inclined slide bearing (Figure 2) can be used as building blocks of a calculation method.
Without loss of generality, Figure 1 shows the pressure created by the bearing pads only. In reality, the water pressure in the grooves varies from negligible to significant. Since this procedure works with non-dimensional functions, the water pressure in the grooves can easily be added back to the dimensional pressure. The implementation of this idea starts from evaluating the dynamic characteristics of the sliding bearing shown in Figure 2. To avoid confusion, an inclined slider is also called an inclined slide bearing, while a sliding pad or bearing pad always refers to the bearing surface between two neighboring grooves.

Figure 1: Grooved bearing as an assembly of sliding pads

Figure 2: Infinite length inclined linear slide bearing

As indicated in the introduction, the load capacity of the entire bearing depends on the load capacity of the individual inclined sliding pads. The accuracy of the solution is a direct function of how well the geometry of the inclined slide bearing represents the wedge shape of the individual pads of the circular bearing. In the following sections, three useful geometries of inclined slide bearings are examined and their proximity to the wedge shape of the main bearing is compared.

2.1 Linear Inclined Slide Bearing

The linear inclined slider was used in previous work [ 1 ].
The water film is represented by a linear function as follows: $$h(x^{ * } ) = h_{To} \cdot \left[ {1 - x^{ * } \cdot (\eta - 1)} \right].$$ The main functions of the solution are: Load capacity (non-dimensional force) $$\varPi_{L} (\eta ) = \frac{{W_{o} \cdot h_{T0}^{2} }}{{\mu \cdot V \cdot B^{2} \cdot L}} = \frac{{6 \cdot \left[ {\left( {\eta + 1} \right) \cdot \ln \eta - 2 \cdot \left( {\eta - 1} \right)} \right]}}{{\left( {\eta - 1} \right)^{2} \cdot \left( {\eta + 1} \right)}}$$ Location of the static load center (non-dimensional distance) $$A_{L} (\eta ) = \frac{{x_{C} }}{B} = \frac{{\eta \cdot \left( {\frac{\eta + 2}{\eta - 1}} \right) \cdot \ln \eta - \frac{5}{2} \cdot \left( {\eta - 1} \right) - 3}}{{\left( {\eta + 1} \right) \cdot \ln \eta - 2\left( {\eta - 1} \right)}}$$ Stiffness function (non-dimensional) $$K_{L} (\eta ) = 6 \cdot \frac{2 \cdot \eta \cdot \ln \eta - \eta + 1}{{\eta \cdot (\eta - 1)^{2} }} - \frac{6}{\eta \cdot (\eta + 1)} + \frac{12}{{1 - \eta^{2} }}$$ Damping function (non-dimensional) $$C_{L} (\eta ) = - 6 \cdot \frac{\eta \cdot \ln \eta - \eta + 1}{{(\eta - 1)^{3} }} + 6\frac{\eta \cdot \ln \eta }{{(\eta^{2} - 1) \cdot (\eta - 1)}}.$$ It is noticed that the right side of Eqs. ( 2)‒( 5) is a function only of the ratio of the film thickness at the leading edge to the film thickness at the trailing edge. For the purpose of evaluating stiffness and damping, the film thickness at the leading and trailing edges is considered as a function of time. This is shown in Figure 3.

Figure 3: Inclined linear slide bearing in dynamic motion

The non-dimensional dynamic load is expressed by the stiffness and damping functions: $$\frac{{W_{1} \cdot h_{T0}^{2} }}{{\mu \cdot V \cdot B^{2} \cdot L}} = - \left[ {K_{L} (\eta ) + i \cdot C_{L} (\eta )} \right].$$ According to Ref.
[ 1 ], the final load on the linear inclined slide bearing is expressed as $$\begin{aligned} W = W_{0} - \frac{{\mu \cdot V \cdot B^{2} \cdot L}}{{h_{T0}^{3} }} \cdot K_{L} (\eta ) \cdot \Delta h_{T} (t) \hfill \\ \quad \quad \quad - \frac{{\mu \cdot B^{3} \cdot L}}{{h_{T0}^{3} }} \cdot C_{L} (\eta ) \cdot \Delta \dot{h}_{T} (t). \hfill \\ \end{aligned}$$ Eq. ( 7) is the force on the individual slider under dynamic motion; it is the fundamental relationship between the slider force and the slider displacement and squeeze velocity. As long as the four functions expressed in Eqs. ( 2)‒( 5) are known, the load capacity of the inclined slide bearing is fully defined. Therefore, the whole subject turns into finding a set of functions of the form of Eqs. ( 2)‒( 5) for each geometric shape of the slider.

2.2 Exponential Gap Slide Bearing

The shape of the exponential gap slide bearing is expressed by the following equation: $$h(x^{ * } ) = h_{To} \cdot \exp ( - x^{ * } \cdot \ln \eta ).$$ The four resulting functions are as follows: Non-dimensional load capacity $$\varPi_{E} (\eta ) = \frac{{W_{o} \cdot h_{T0}^{2} }}{{\mu \cdot V \cdot B^{2} \cdot L}} = \frac{{\eta^{2} - 1}}{{2\eta^{2} \cdot \left( {\ln \eta } \right)^{2} }} - \frac{3}{{(\eta^{2} + \eta + 1) \cdot \ln \eta }},$$ $$A_{E} (\eta ) = \frac{{x_{C} }}{B} = \frac{{(\eta^{2} + \eta + 3) \cdot \eta^{2} - \frac{{5(\eta + 1)(\eta^{3} - 1)}}{6\ln \eta } - 3\eta^{2} \ln \eta }}{{(\eta + 1)(\eta^{3} - 1) - 6\eta^{2} \cdot \ln \eta }},$$ $$K_{E} (\eta ) = \frac{{\eta^{2} - 1}}{{\eta^{2} \cdot (\ln \eta )^{2} }} - \frac{6}{{(\eta^{2} + \eta + 1) \cdot \ln \eta }},$$ $$C_{E} (\eta ) = \frac{{\eta^{2} - 1}}{{\eta^{2} (\ln \eta )^{3} }} - \frac{6}{{(\eta^{2} + \eta + 1) \cdot (\ln \eta )^{2} }}.$$

2.3 Parabolic Gap Slide Bearing

The author of this paper derived the four functions for a parabolic gap slider (see Appendix).
The film thickness expression is presented by: $$h(x^{ * } ) = h_{To} \cdot \left[ {1 + x^{ *2} \cdot (\eta - 1)} \right].$$ The resulting functions are as follows. $$\varPi_{P} (\eta ) = \frac{{W_{o} \cdot h_{T0}^{2} }}{{\mu \cdot V \cdot B^{2} \cdot L}} = \frac{{\sqrt {\eta - 1} + (\eta - 2) \cdot \tan^{ - 1} \sqrt {\eta - 1} }}{{\eta^{2} \cdot \tan^{ - 1} \sqrt {\eta - 1} + (\eta + \frac{2}{3}) \cdot \sqrt {\eta - 1} }}.$$ $$A_{P} (\eta ) = \frac{{x_{C} }}{B} = 1 + \frac{{\int\limits_{ - 1}^{0} {x \cdot p_{o}^{ * } (x,\eta ){\text{d}}x} }}{{\varPi_{P} (\eta )}},$$ $$p_{o}^{ * } (x^{ * } ,\eta ) = \frac{2}{{\eta^{2} \tan^{ - 1} \sqrt {\eta - 1} + (\eta + \frac{2}{3})\sqrt {\eta - 1} }} \times \;\left[ {\tan^{ - 1} (x^{ * } \cdot \sqrt {\eta - 1} ) + \frac{{x^{ * } (x^{ *2} - 1)(\eta - 1)^{{\frac{3}{2}}} - \eta^{2} \cdot x^{ * } \cdot \tan^{ - 1} \sqrt {\eta - 1} }}{{(1 + (\eta - 1) \cdot x^{ *2} )^{2} }}} \right].$$ $$K_{P} (\eta ) = 2\frac{{\sqrt {\eta - 1} + (\eta - 2) \cdot \tan^{ - 1} \sqrt {\eta - 1} }}{{\eta^{2} \cdot \tan^{ - 1} \sqrt {\eta - 1} + (\eta + \frac{2}{3}) \cdot \sqrt {\eta - 1} }}.$$ $$\begin{aligned} C_{P} (\eta ) & = \frac{2 \cdot (2\eta + 1)}{{3\eta + 2 + \frac{{3\eta^{2} }}{{\sqrt {\eta - 1} }} \cdot \tan^{ - 1} \sqrt {\eta - 1} }}\\ & \quad \times \;\left[ {\frac{1}{\eta } + \frac{{3 \cdot \tan^{ - 1} \sqrt {\eta - 1} }}{{\sqrt {\eta - 1} }}} \right] - \frac{4\eta - 1}{\eta \cdot (\eta - 1)} + \frac{{3 \cdot \tan^{ - 1} \sqrt {\eta - 1} }}{{(\eta - 1)^{{\frac{3}{2}}} }}. \hfill \\ \end{aligned}$$ Figure 4 shows the basic functions for these three inclined slide bearings. Since the non-dimensional load capacity function is exactly half of the stiffness function for all types of geometries, it is not repeated in Figure 4.

Figure 4: Functions of different basic slide bearings
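As a numerical cross-check, the three sets of slider functions translate directly into code. The sketch below (Python; the function names are mine, not from the paper) implements Eqs. ( 2), ( 4), ( 5), ( 9), ( 11), ( 12), ( 14), ( 17) and ( 18) and can be used to reproduce the curves of Figure 4.

```python
import math

def pi_L(eta):
    """Non-dimensional load capacity of the linear slider, Eq. (2)."""
    return (6.0 * ((eta + 1.0) * math.log(eta) - 2.0 * (eta - 1.0))
            / ((eta - 1.0) ** 2 * (eta + 1.0)))

def k_L(eta):
    """Stiffness function of the linear slider, Eq. (4)."""
    return (6.0 * (2.0 * eta * math.log(eta) - eta + 1.0)
            / (eta * (eta - 1.0) ** 2)
            - 6.0 / (eta * (eta + 1.0)) + 12.0 / (1.0 - eta ** 2))

def c_L(eta):
    """Damping function of the linear slider, Eq. (5)."""
    return (-6.0 * (eta * math.log(eta) - eta + 1.0) / (eta - 1.0) ** 3
            + 6.0 * eta * math.log(eta) / ((eta ** 2 - 1.0) * (eta - 1.0)))

def pi_E(eta):
    """Non-dimensional load capacity of the exponential slider, Eq. (9)."""
    ln = math.log(eta)
    return ((eta ** 2 - 1.0) / (2.0 * eta ** 2 * ln ** 2)
            - 3.0 / ((eta ** 2 + eta + 1.0) * ln))

def k_E(eta):
    """Stiffness function of the exponential slider, Eq. (11)."""
    ln = math.log(eta)
    return ((eta ** 2 - 1.0) / (eta ** 2 * ln ** 2)
            - 6.0 / ((eta ** 2 + eta + 1.0) * ln))

def c_E(eta):
    """Damping function of the exponential slider, Eq. (12)."""
    ln = math.log(eta)
    return ((eta ** 2 - 1.0) / (eta ** 2 * ln ** 3)
            - 6.0 / ((eta ** 2 + eta + 1.0) * ln ** 2))

def pi_P(eta):
    """Non-dimensional load capacity of the parabolic slider, Eq. (14)."""
    s = math.sqrt(eta - 1.0)
    a = math.atan(s)
    return (s + (eta - 2.0) * a) / (eta ** 2 * a + (eta + 2.0 / 3.0) * s)

def k_P(eta):
    """Stiffness function of the parabolic slider, Eq. (17) = 2 x Eq. (14)."""
    return 2.0 * pi_P(eta)

def c_P(eta):
    """Damping function of the parabolic slider, Eq. (18)."""
    s = math.sqrt(eta - 1.0)
    a = math.atan(s)
    return (2.0 * (2.0 * eta + 1.0)
            / (3.0 * eta + 2.0 + 3.0 * eta ** 2 / s * a)
            * (1.0 / eta + 3.0 * a / s)
            - (4.0 * eta - 1.0) / (eta * (eta - 1.0))
            + 3.0 * a / (eta - 1.0) ** 1.5)
```

At, for example, η = 2 the sketch confirms the two properties noted in the text: each load capacity function is exactly half of its stiffness function, and the parabolic and exponential stiffness values lie above the linear one.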
This implied if the parabolic function is in good agreement with the actual bearing clearance (pads), a circular bearing simulated with the parabolic function will have a higher load capacity. The dynamic load center is slightly different from static load center Figure 4(d). Appendix provides the definition of load center ratio R( η). This paper used static load center for Sommerfeld number evaluation and dynamic load center for stiffness and damping evaluation for all three types of sliding bearings. 3 Assembly Procedure To evaluate which type of slider geometry best suitable for building the circular bearing, the assembly procedure must be presented first. The first step of the assembly procedure is to define the location angles of each pad relative to a rotating co-ordinate frame r- ϕ [ 8 ]. Figure 5 shows a circular journal bearing with multiple axial grooves under a steady operational condition. By given load and shaft speed, the shaft center is offset from bearing center with an eccentricity of " e". The connecting line between bearing center and shaft center is in-line with r-axis of the rotating coordinate system r- ϕ. Assuming the bearing is fixed in position and the load is vertical as shown on the figure, the r- ϕ coordinate system has an attitude angle " Φ" with respect to the loading direction, namely, the y-axis. The attitude angle changes depending on load, shaft speed and groove numbers. Definition of pad location angles A second co-ordinate frame x- y is defined in line with load direction. In this system, y-axis is in load direction and x-axis is perpendicular to the load direction. In Figure 5, the r-axis divides the entire bearing into two equal halves. All pads underneath r-axis (in the sense of Figure 5) have convergent angles in shaft rotating direction and therefore are able to create hydrodynamic lifting forces. 
All pads above the r-axis have divergent angles in the shaft rotating direction and are therefore not able to create hydrodynamic lifting forces. In theory, the divergent bearing half could create a vacuum and therefore a negative pressure. In practice, however, an outside source of lubricant is supplied to the bearing grooves in almost all cases, so the divergent half will not develop a negative pressure but stays at the same pressure as the lubricant supplied to the grooves. It is therefore acceptable to assume zero pressure on that half of the bearing. This assumption corresponds to the half-Sommerfeld or Gümbel boundary condition. Since the bearing is assumed to be fixed, all angles (\(\alpha_{Li}\), \(\alpha_{Ti}\), i = 1, 2, 3, …, N/2) defining the positions of the grooves will change with the attitude angle, which is an unknown parameter. One set of groove location angles defines only one particular equilibrium of steady operation. For calculation purposes, a set of "floating numbers" is assigned to the pads underneath the r-axis. As a rule, no matter how the attitude angle changes, the first convergent pad underneath the r-axis, at the minimum water film location, is always assigned number "1". The other pads are enumerated clockwise with numbers 2, 3, 4, … in sequence. After the pad location angles have been defined, the film thickness ratio of the leading to the trailing edge under steady operating conditions is $$\eta_{i} = \frac{{1 + \varepsilon \cdot \cos \alpha_{Li} }}{{1 + \varepsilon \cdot \cos \alpha_{Ti} }},\quad i = 1,\;2,\; \ldots ,\;N/2.$$ The film thickness at any leading and trailing edge is $$h_{L0,i} = c \cdot (1 + \varepsilon \cdot \cos \alpha_{Li} ),\quad i = 1,\;2,\; \ldots ,\;N/2,$$ $$h_{T0,i} = c \cdot (1 + \varepsilon \cdot \cos \alpha_{Ti} ),\quad i = 1,\;2,\; \ldots ,\;N/2.$$ The second step of the assembly procedure is to calculate the force contribution of each pad to supporting the entire bearing load.
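The per-pad film geometry of Eqs. ( 18)‒( 20) translates directly into a small helper. The sketch below is a minimal Python illustration; the function name and the example angles are illustrative choices, not values from the paper.

```python
import math

def pad_film(eps, alpha_L, alpha_T, c=1.0):
    """Leading/trailing film thicknesses and their ratio for one pad,
    Eqs. (18)-(20). Angles in radians, measured from the r-axis;
    c is the radial clearance."""
    h_L0 = c * (1.0 + eps * math.cos(alpha_L))   # Eq. (19)
    h_T0 = c * (1.0 + eps * math.cos(alpha_T))   # Eq. (20)
    return h_L0 / h_T0, h_L0, h_T0               # ratio is Eq. (18)

# Illustrative pad on the convergent half: leading edge at 30 deg,
# trailing edge at 60 deg, eccentricity ratio 0.5, unit clearance.
eta, h_L0, h_T0 = pad_film(0.5, math.pi / 6, math.pi / 3)
```

For a convergent pad the film thins from the leading to the trailing edge, so the computed ratio η exceeds 1.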
Figure 6 illustrates the supporting force from one pad with location angles \(\alpha_{Li}\) and \(\alpha_{Ti}\). Considering Eqs. ( 19) and ( 20), the force on each individual pad can be calculated using Eq. ( 7) obtained in the previous section. All terms in Eq. ( 21) refer to pad number " i": $$W_{i} = W_{0i} - \frac{{c_{1} \cdot K(\eta_{i} )}}{{(1 + \varepsilon \cdot \cos \alpha_{Ti} )^{3} }} \cdot \Delta h_{Ti} (t) - \frac{{c_{2} \cdot C(\eta_{i} )}}{{(1 + \varepsilon \cdot \cos \alpha_{Ti} )^{3} }} \cdot \Delta \dot{h}_{Ti} (t),$$ with \(c_{1} = \frac{{\mu \cdot V \cdot B^{2} \cdot L}}{{c^{3} }};\quad c_{2} = \frac{{\mu \cdot B^{3} \cdot L}}{{c^{3} }}\).

Figure 6: Contribution of pad load to bearing

In Figure 6, the pad load is considered to be in the direction pointing toward the bearing center. The projection of the pad load onto the r- ϕ coordinate system is then $$\begin{aligned} &- W_{i,r} = W_{0i} \cos (\uppi - \varTheta_{i} ) \hfill \\ &- \left( {\frac{{c_{1} \cdot K(\eta_{i} )}}{{(1 + \varepsilon \cdot \cos \alpha_{Ti} )^{3} }} \cdot \Delta h_{Ti} (t) + \frac{{c_{2} \cdot C(\eta_{i} )}}{{(1 + \varepsilon \cdot \cos \alpha_{Ti} )^{3} }} \cdot \Delta \dot{h}_{Ti} (t)} \right) \cdot \cos (\uppi - \vartheta_{i} ), \hfill \\ \end{aligned}$$ $$\begin{aligned} &W_{i,\varphi } = W_{0i} \sin (\uppi - \varTheta_{i} ) \hfill \\ &- \left( {\frac{{c_{1} \cdot K(\eta_{i} )}}{{(1 + \varepsilon \cdot \cos \alpha_{Ti} )^{3} }} \cdot \Delta h_{Ti} (t) + \frac{{c_{2} \cdot C(\eta_{i} )}}{{(1 + \varepsilon \cdot \cos \alpha_{Ti} )^{3} }} \cdot \Delta \dot{h}_{Ti} (t)} \right) \cdot \sin (\uppi - \vartheta_{i} ), \hfill \\ \end{aligned}$$ In Eqs. ( 22), ( 23), the function \(K(\eta )\) is one of the functions \(K_{L} (\eta )\), \(K_{E} (\eta )\) or \(K_{P} (\eta )\), depending on which one is chosen. The same applies to the function \(C(\eta )\).
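Eq. ( 21) can be sketched as a small helper. In the Python sketch below the function name and all numeric example values are illustrative assumptions; K_val and C_val stand for \(K(\eta_{i} )\) and \(C(\eta_{i} )\) of whichever slider geometry is chosen.

```python
import math

def pad_force(W0, mu, V, B, L, c, eps, alpha_T, K_val, C_val, dh_T, dh_T_dot):
    """Dynamic pad load, Eq. (21): the steady load W0 minus the stiffness
    and damping reactions to the trailing-edge squeeze dh_T and its rate."""
    c1 = mu * V * B ** 2 * L / c ** 3
    c2 = mu * B ** 3 * L / c ** 3
    film = (1.0 + eps * math.cos(alpha_T)) ** 3
    return W0 - c1 * K_val / film * dh_T - c2 * C_val / film * dh_T_dot
```

With zero squeeze displacement and velocity the pad carries only its steady load \(W_{0i}\); a positive squeeze displacement (film thickening) reduces the reaction, consistent with the sign convention of Eq. ( 7).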
For the entire circular bearing, the dynamic part of the pad force is caused only by a very small change of the bearing eccentricity Δ e and attitude angle Δ Φ. The film thickness at the trailing edge can therefore be related to these small changes of eccentricity and attitude angle as follows: $$\Delta h_{Ti} = \cos \alpha_{Ti} \cdot \Delta e + \sin \alpha_{Ti} \cdot e \cdot \Delta \varPhi .$$ The same can be applied to the pad velocity. However, the velocity must refer to the entire pad, not just the trailing edge. This implies that the pad has no rotation about its load center at any instant of the dynamic motion. The velocity therefore becomes: $$\Delta \dot{h}_{Ti} = \cos (\uppi - \vartheta_{i} ) \cdot \Delta \dot{e} + \sin (\uppi - \vartheta_{i} ) \cdot e \cdot \Delta \dot{\varPhi }\text{,}$$ $$\vartheta_{i} = A_{d} (\eta_{i} ) \cdot \alpha_{Ti} + \left[ {1 - A_{d} (\eta_{i} )} \right] \cdot \alpha_{Li} ,\quad \varTheta_{i} = A(\eta_{i} ) \cdot \alpha_{Ti} + \left[ {1 - A(\eta_{i} )} \right] \cdot \alpha_{Li},\quad i = 1, 2, 3, \ldots ,N/2.$$ The function \(A(\eta )\) is one of \(A_{L} (\eta ),\;A_{E} (\eta )\;{\text{or}}\;A_{P} (\eta )\), and \(A_{d} (\eta )\) is one of \(A_{Ld} (\eta ),\;A_{Ed} (\eta )\;{\text{or}}\;A_{Pd} (\eta )\), depending on which type of sliding bearing is chosen. Inserting Eqs. ( 24) and ( 25) into Eqs.
( 22) and ( 23), their matrix form can be expressed as $$\begin{aligned} \left( {\begin{array}{*{20}c} { - W_{i,r} } \\ {W_{i,\phi } } \\ \end{array} } \right) & = \left( {\begin{array}{*{20}c} {W_{0i} \cdot \cos (\uppi - \varTheta_{i} )} \\ {W_{0i} \cdot \sin (\uppi - \varTheta_{i} )} \\ \end{array} } \right) - c_{1} \left[ {\begin{array}{*{20}c} {K_{rr,i} } & {K_{r\phi ,i} } \\ {K_{\phi r,i} } & {K_{\phi \phi ,i} } \\ \end{array} } \right]\left( {\begin{array}{*{20}c} {\Delta e} \\ {e \cdot \Delta \phi } \\ \end{array} } \right) \hfill \\ & \quad - c_{2} \left[ {\begin{array}{*{20}c} {C_{rr,i} } & {C_{r\phi ,i} } \\ {C_{\phi r,i} } & {C_{\phi \phi ,i} } \\ \end{array} } \right]\left( {\begin{array}{*{20}c} {\Delta \dot{e}} \\ {e\Delta \dot{\phi }} \\ \end{array} } \right). \hfill \\ \end{aligned}$$ The stiffness and damping coefficients of the pad with index "i" are: $$K_{rr,i} = \frac{{\cos \alpha_{Ti} }}{{(1 + \varepsilon \cdot \cos \alpha_{Ti} )^{3} }} \cdot K(\eta_{i} ) \cdot \cos ({{\uppi }} - \vartheta_{i} ),$$ $$K_{r\phi ,i} = \frac{{\sin \alpha_{Ti} }}{{(1 + \varepsilon \cdot \cos \alpha_{Ti} )^{3} }} \cdot K(\eta_{i} ) \cdot \cos ({{\uppi }} - \vartheta_{i} ),$$ $$K_{\phi r,i} = \frac{{\cos \alpha_{Ti} }}{{(1 + \varepsilon \cdot \cos \alpha_{Ti} )^{3} }} \cdot K(\eta_{i} ) \cdot \sin ({{\uppi }} - \vartheta_{i} ),$$ $$K_{\phi \phi ,i} = \frac{{\sin \alpha_{Ti} }}{{(1 + \varepsilon \cdot \cos \alpha_{Ti} )^{3} }} \cdot K(\eta_{i} ) \cdot \sin ({{\uppi }} - \vartheta_{i} ),$$ $$C_{rr,i} = \frac{{C(\eta_{i} )}}{{(1 + \varepsilon \cdot \cos \alpha_{Ti} )^{3} }} \cdot \cos^{2} ({{\uppi }} - \vartheta_{i} ),$$ $${C_{r\phi ,i}} = {C_{\phi r,i}} = \frac{{C({\eta _i})}}{{{{(1 + \varepsilon \cdot \cos {\alpha _{Ti}})}^3}}} \cdot \cos ({{\uppi }} - {\vartheta _i}) \cdot \sin ({{\uppi }} - {\vartheta _i}),$$ $$C_{\phi \phi ,i} = \frac{{C(\eta_{i} )}}{{(1 + \varepsilon \cdot \cos \alpha_{Ti} )^{3} }} \cdot \sin^{2} ({{\uppi }} - \vartheta_{i} ).$$ 4 Quality
Comparison of Slide Bearing Geometry The above section proposed the idea of using an array of inclined slide bearings to build a circular journal bearing with multiple axial grooves. However, the quality of this approach depends on how closely the individual slider represents the shape of the individual pad at any given location and eccentricity ratio. For the true grooved bearing, the non-dimensional form of the water film thickness at any given location angle "α" (Figure 7) is $$\overline{h} = \frac{h}{c} = (1 + \varepsilon \cdot \cos \alpha ).$$ (Figure 7) Definition of local coordinate. When evaluating the individual inclined slide bearing, the non-dimensional water film thickness is expressed as a function of the film thickness ratio of leading to trailing edge as well as the local coordinate "s" (Figure 7). The non-dimensional water film thickness for pad number "i" in terms of this ratio and the local coordinate can therefore be written in the following form: $$\bar{h}(\bar{s},i) = \left\{ {1 + \varepsilon \cdot \cos \left[ {\bar{s} \cdot \left( {\alpha_{Ti} - \cos^{ - 1} \frac{{(1 + \varepsilon \cdot \cos \alpha_{Ti} ) \cdot \eta_{i} - 1}}{\varepsilon }} \right) + \alpha_{Ti} } \right]} \right\},$$ where \(\overline{s} = s/B\) is the non-dimensional local coordinate. A real bearing may have grooves with chamfers or round fillets. Since any chamfer or fillet is too large for water film formation, chamfers and fillets must be considered part of the grooves, not of the pad surfaces. All other expressions of the water film thickness for bearing pad "i" can likewise be written in terms of the local coordinate and the film thickness ratio of leading to trailing edge.
For the linear inclined slide bearing, $$\bar{h}_{L} (\bar{s},i) = \left( {1 + \varepsilon \cdot \cos \alpha_{Ti} } \right) \cdot \left[ {1 - \bar{s} \cdot (\eta_{i} - 1)} \right].$$ For the exponential inclined slide bearing, $$\bar{h}_{E} (\bar{s},i) = \left( {1 + \varepsilon \cdot \cos \alpha_{Ti} } \right) \cdot \exp \left[ { - \bar{s} \cdot \ln \eta_{i} } \right].$$ For the parabolic inclined slide bearing, $$\bar{h}_{P} (\bar{s},i) = \left( {1 + \varepsilon \cdot \cos \alpha_{Ti} } \right) \cdot \left[ {1 + \bar{s}^{2} \cdot (\eta_{i} - 1)} \right].$$ Eqs. ( 29)‒( 31) present the water film thicknesses intended to replace Eq. ( 28). The purpose is to simplify the problem by solving the Reynolds equation at the inclined-slide-bearing level rather than at the full-bearing level. Note that all of these equations return the film thickness at the trailing edge of pad "i", which is \((1 + \varepsilon \cdot \cos \alpha_{Ti} )\), when \(\overline{s} = 0\). By the same token, they return the film thickness at the leading edge, which is \((1 + \varepsilon \cdot \cos \alpha_{Ti} ) \cdot \eta_{i}\), when \(\overline{s} = - 1\). To evaluate which of the functions in Eqs. ( 29)‒( 31) best approximates Eq. ( 28), a set of square root errors is defined: $$S_{L} (i,\varepsilon ) = \frac{{\sqrt {\int\limits_{ - 1}^{0} {[\bar{h}(s,i) - \bar{h}_{L} (s,i)]^{2} {\text{d}}s} } }}{{\int\limits_{ - 1}^{0} {\bar{h}(s,i) \cdot {\text{d}}s} }},$$ $$S_{E} (i,\varepsilon ) = \frac{{\sqrt {\int\limits_{ - 1}^{0} {[\bar{h}(s,i) - \bar{h}_{E} (s,i)]^{2} {\text{d}}s} } }}{{\int\limits_{ - 1}^{0} {\bar{h}(s,i) \cdot {\text{d}}s} }},$$ $$S_{P} (i,\varepsilon ) = \frac{{\sqrt {\int\limits_{ - 1}^{0} {[\bar{h}(s,i) - \bar{h}_{P} (s,i)]^{2} {\text{d}}s} } }}{{\int\limits_{ - 1}^{0} {\bar{h}(s,i) \cdot {\text{d}}s} }}.$$ Eqs. ( 32)‒( 34) were derived from the non-dimensional water film thickness and are functions of the eccentricity ratio and of the number and distribution of grooves.
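The error measures of Eqs. (32)-(34) can be evaluated numerically. The sketch below compares the three profiles of Eqs. (29)-(31) against the true film of Eq. (28) along one pad; the pad edge angles used in the check (first pad of a 12-groove bearing with an assumed 5° groove width, angles measured from the maximum-film direction) are illustrative assumptions, and plain Riemann sums stand in for the integrals:

```python
import math

def profiles(eps, a_L, a_T):
    """True film along a pad, Eq. (28), plus the three slider
    approximations of Eqs. (29)-(31); s runs from -1 (leading) to 0."""
    h_T = 1.0 + eps * math.cos(a_T)
    eta = (1.0 + eps * math.cos(a_L)) / h_T
    def h_true(s):
        return 1.0 + eps * math.cos(a_T + s * (a_T - a_L))
    approx = {
        "linear":      lambda s: h_T * (1.0 - s * (eta - 1.0)),
        "exponential": lambda s: h_T * math.exp(-s * math.log(eta)),
        "parabolic":   lambda s: h_T * (1.0 + s * s * (eta - 1.0)),
    }
    return h_true, approx

def rms_error(h_true, h_approx, n=1000):
    """Normalized square root error in the sense of Eqs. (32)-(34)."""
    ds = 1.0 / n
    num = sum((h_true(-k * ds) - h_approx(-k * ds)) ** 2 for k in range(n + 1)) * ds
    den = sum(h_true(-k * ds) for k in range(n + 1)) * ds
    return math.sqrt(num) / den
```

For the first pad at a high eccentricity ratio this reproduces the ranking reported below: the parabolic profile has the smallest error, consistent with the film being locally quadratic around the minimum-film position.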
They are valid for bearings of any size with any number of grooves. In the following, a 12-groove bearing is examined for the square root errors. This bearing has six load-taking bearing pads. The location of the minimum film thickness is assumed to fall at the very center of a groove, so that the first pad, numbered "1", is loaded over its entire width. Figure 8 shows the result for the bearing with 12 grooves. The parabolic inclined slider has the least error for the first pad for eccentricity ratios from 0.9 to 0.999, which is the range of most interest for water lubricated guide bearings. The linear slider appears best suited for the second pad at high eccentricity ratio and for the remaining pads. The exponential slider appears suitable for all pads except the first at lower eccentricity ratios. However, these are only observations from a bearing with 12 grooves. Investigation of different numbers of grooves showed that the error of the exponential slider changes rapidly with increasing eccentricity ratio: for small eccentricity ratios the errors are small, while for large eccentricity ratios they are large. This is especially true for the first and second pads. The linear slide bearing is insensitive to the eccentricity ratio. Therefore, in the following evaluation, a scheme applying the parabolic slider to the first pad and the linear slider to the remaining pads is used. This paper investigated only three types of sliding bearings; there are certainly other types that would fit the purpose. As demonstrated in Ref. [ 8 ], the first pad takes most of the load of the entire bearing. Even though the parabolic slider is only able to simulate the first loaded pad, it still yields a significant improvement in load capacity compared with the approach using all linear sliders. (Figure 8) Square root error of different slider. 5 Steady Operation and Sommerfeld Number 5.1 Effect of Groove Number Ref.
[ 8 ] provides a procedure to calculate the load capacity and water film under steady operating conditions. The non-dimensional load capacity of the bearing was determined as follows: $$W_{0r,i} = \frac{{\varPi (\eta_{i} )}}{{(1 + \varepsilon \cdot \cos \alpha_{Ti} )^{2} }} \cdot \cos \left\{ {{{\uppi }} - \varTheta_{i} } \right\},$$ $$W_{0\varphi ,i} = \frac{{\varPi (\eta_{i} )}}{{(1 + \varepsilon \cdot \cos \alpha_{Ti} )^{2} }} \cdot \sin \left\{ {{{\uppi }} - \varTheta_{i} } \right\},$$ $$\varPi (\eta_{i} ) = \left\{ {\begin{array}{*{20}l} {\varPi_{L} (\eta_{i} ),\quad {\text{for}}\;{\text{linear}}\;{\text{slider,}}} \hfill \\ {\varPi_{P} (\eta_{i} ),\quad {\text{for}}\;{\text{parabolic}}\;{\text{slider}} .} \hfill \\ \end{array} } \right.$$ The total non-dimensional supporting force contributed by all pads underneath the r-axis is then the sum of all components above: $$W_{0\varphi } = \sum\limits_{i = 2}^{N/2} {W_{0\varphi ,i} } + \lambda^{2} \cdot W_{0\varphi ,1} ,$$ $$W_{0r} = \sum\limits_{i = 2}^{N/2} {W_{0r,i} } + \lambda^{2} \cdot W_{0r,1} ,$$ where λ is a number not greater than 1.0. In Figure 5, if the position of the minimum film thickness is located within pad number 1, only a part of this pad takes load. The number "λ" gives the fraction of the pad that takes load. Its value is unknown at the beginning of a calculation. It depends on the attitude angle "Φ", the width of the bearing pads "B", the width of the grooves, and the relationship between the bearing loading direction and the position of the pad toward which the load points. For vertical bearings, such as hydro turbine guide bearings and vertical pump bearings, the loading direction is undefined. In this case, a practical calculation can be made by assuming λ = 0.5, which represents the condition with the least load capacity. For horizontal bearings, the load direction can be easily defined, and the parameter λ can be determined by iteration.
At the beginning, an initial value is assumed, for example 0.5; the attitude angle is then calculated according to Eq. ( 40) below, and subsequently a new λ-value is calculated. The calculation is run again with the new value to obtain another attitude angle, and the procedure is repeated until a satisfactory result is obtained. In the evaluations of Figures 9, 10, 11 and 12, λ = 1.0 was used. A full analysis of the effect of the λ-value on the Sommerfeld Number would merit a separate paper. (Figure 9) Sommerfeld Number comparison. (Figure 10) Ratio of Sommerfeld Number. (Figure 11) Sommerfeld Number of grooved bearings. (Figure 12) Ratio of Sommerfeld Number of grooved to non-grooved bearing. The resultant force in dimensional form is then $$W_{0} = \frac{{\mu \cdot V \cdot B^{2} \cdot L}}{{c^{2} }}\sqrt {W_{0\varphi }^{2} + W_{0r}^{2} } .$$ The attitude angle is: $$\varPhi = - \tan^{ - 1} \frac{{W_{0\phi } }}{{W_{0r} }}.$$ According to the conventional definition, the Sommerfeld Number for circular bearings is $$S = \frac{{\mu \cdot N_{s} \cdot d \cdot L}}{{W_{0} }} \cdot \left( {\frac{d}{2c}} \right)^{2} .$$ The Sommerfeld Number for grooved bearings can be derived using Eq. ( 39): $$S = \frac{{d^{2} }}{{4 \cdot {{\uppi }} \cdot B^{2} \cdot \sqrt {W_{0\varphi }^{2} + W_{0r}^{2} } }}.$$ The Sommerfeld Number defined by Eq. ( 42) is a function of the number of grooves and the eccentricity ratio. Its reciprocal defines the load capacity of a bearing, while the reciprocal of Eq. ( 41) is the actual load on the bearing. It is evident that for low eccentricity ratios (less than 0.9) there is no significant difference between the modeling with all linear sliders and the one with mixed sliders, namely with the first slider parabolic and the rest linear. However, for high eccentricity ratios ( ε > 0.9), which reflect the case of high loading, the difference can be significant (Figure 9).
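The steady-state assembly of Eqs. (35)-(42) can be sketched as below. This is an illustration under stated assumptions, not the paper's implementation: the pad grid and groove width are assumed; Π_L is the classical fixed-incline slider load function, used here in place of the paper's Π_L from Ref. [8]; Π_P is Eq. (A15) from the Appendix; and the load-center function A(η) is replaced by the placeholder value 0.5 (pad mid-point), since the exact A functions are not reproduced in this section.

```python
import math

def PI_L(eta):
    # classical fixed-incline (linear) slider load-capacity function
    return 6.0 * (math.log(eta) - 2.0 * (eta - 1.0) / (eta + 1.0)) / (eta - 1.0) ** 2

def PI_P(eta):
    # parabolic-gap slider load-capacity function, Eq. (A15)
    t = math.sqrt(eta - 1.0)
    return ((eta - 2.0) * math.atan(t) + t) / (eta ** 2 * math.atan(t) + (eta + 2.0 / 3.0) * t)

def sommerfeld(N, eps, gw, d, B, lam=1.0, parabolic_first=True):
    """Sommerfeld Number per Eq. (42) for an N-groove bearing; pad 1
    (nearest the minimum film) is weighted by lam**2 as in Eq. (38)."""
    pitch = 2.0 * math.pi / N
    W_r = W_phi = 0.0
    for i in range(1, N // 2 + 1):
        a_T = math.pi - (i - 1) * pitch - gw / 2.0
        a_L = a_T - (pitch - gw)
        h_T = 1.0 + eps * math.cos(a_T)
        eta = (1.0 + eps * math.cos(a_L)) / h_T
        PI = PI_P(eta) if (parabolic_first and i == 1) else PI_L(eta)
        Theta = 0.5 * (a_T + a_L)          # placeholder load-center angle
        w = PI / h_T ** 2                  # per-pad magnitude, Eqs. (35)-(36)
        scale = lam ** 2 if i == 1 else 1.0
        W_r += scale * w * math.cos(math.pi - Theta)
        W_phi += scale * w * math.sin(math.pi - Theta)
    return d ** 2 / (4.0 * math.pi * B ** 2 * math.hypot(W_r, W_phi))
```

Even this rough sketch reproduces the trend discussed next: at ε = 0.95 the all-linear model returns a higher Sommerfeld Number (lower load capacity) than the mixed scheme with a parabolic first pad.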
For the example investigated, the Sommerfeld Number simulated with all linear pads can be up to 5 times higher than the Sommerfeld Number with mixed pad geometry. By definition, a higher Sommerfeld Number means a lower load capacity. This is reflected in Figure 10. The Sommerfeld Number ratio shown in Figure 10 is the Sommerfeld Number with all linear sliders divided by the Sommerfeld Number with mixed pads, in which the first pad is parabolic and the rest are linear. Water lubricated guide bearings can be subject to eccentricity ratios as high as 0.999. In this case, an all-linear model definitely underestimates the bearing load capacity. Based on the quality comparison of sliding bearing geometries in the previous section, the parabolic gap is closer to the true shape of the bearing clearance of the first pad. The mixed scheme must therefore represent the true bearing performance more closely; it is an improvement over the all-linear slider modeling, especially at large eccentricity ratios. Ren et al. [ 1 ] in their previous paper quantitatively demonstrated that the load capacity of a grooved bearing is lower than that of a non-grooved bearing. The Sommerfeld Number of the grooved bearing modeled with all linear pads was compared with the benchmark Sommerfeld Number, namely the formulation from Refs. [ 4 , 5 ]. A similar comparison is made here between the grooved bearing with the mixed types of inclined sliders and the non-grooved bearing. The Sommerfeld Number of the solution by Childs is again used as the benchmark for the comparison. The heavy curve in Figure 11 is the Sommerfeld Number of the non-grooved bearing; the other curves are for grooved bearings with different numbers of grooves. The grooves clearly reduce the load capacity of a bearing: the more grooves, the bigger the reduction in load capacity. The ratio of the Sommerfeld Number of grooved to non-grooved bearings is shown in Figure 12. This provides a better visualization of how much load capacity reduction can be expected.
Figure 12 shows that reducing the number of grooves is an effective way to increase the load capacity of a grooved bearing. 5.2 Effect of Groove Size The defining equation of the Sommerfeld Number of a grooved bearing, Eq. ( 42), shows that it is a function of the ratio d/ B. For a fixed number of grooves, the size (width) of the grooves takes away a part of the bearing surface, which results in narrower bearing pads and increases the d/ B ratio. Therefore, for the real load capacity of a practical design, the groove effect must be taken into consideration, especially for grooves with round or filleted corners. 6 Stiffness and Damping Coefficients Following the procedure of Refs. [ 1 , 8 ], the non-dimensional stiffness and damping coefficients for a circular bearing with multiple axial grooves are obtained by summing the stiffness and damping coefficients over all supporting pads and are expressed as $$\begin{aligned} K_{rr} = \sum\limits_{i = 2}^{N/2} {\frac{{\cos \alpha_{Ti} }}{{(1 + \varepsilon \cdot \cos \alpha_{Ti} )^{3} }} \cdot K_{L} (\eta_{i} )} \cdot \cos (\uppi - \vartheta_{i} ) \hfill \\ \quad \quad \quad + \frac{{\lambda^{2} \cdot \cos \alpha_{T1} }}{{(1 + \varepsilon \cdot \cos \alpha_{T1} )^{3} }} \cdot K_{P} (\eta_{1} ) \cdot \cos (\uppi - \vartheta_{1} ), \hfill \\ \end{aligned}$$ $$\begin{aligned}{K_{r\phi }} & = \sum\limits_{i = 2}^{N/2} {\frac{{\sin {\alpha _{Ti}}}}{{{{(1 + \varepsilon \cdot \cos {\alpha _{Ti}})}^3}}} \cdot {K_L}({\eta _i})} \cdot \cos ({{\uppi }} - {\vartheta _i}) \\ & \quad + \frac{{{\lambda ^2} \cdot \sin {\alpha _{T1}}}}{{{{(1 + \varepsilon \cdot \cos {\alpha _{T1}})}^3}}} \cdot {K_P}({\eta _1}) \cdot \cos ({{\uppi }} - {\vartheta _1}), \end{aligned}$$ $$\begin{aligned}{K_{\phi r}}& = \sum\limits_{i = 2}^{N/2} {\frac{{\cos {\alpha _{Ti}}}}{{{{(1 + \varepsilon \cdot \cos {\alpha _{Ti}})}^3}}} \cdot {K_L}({\eta _i})} \cdot \sin ({{\uppi }} - {\vartheta _i}) \\ & \quad + \frac{{{\lambda ^2} \cdot \cos {\alpha _{T1}}}}{{{{(1 + \varepsilon \cdot
\cos {\alpha _{T1}})}^3}}} \cdot {K_P}({\eta _1}) \cdot \sin ({{\uppi }} - {\vartheta _1}),\end{aligned}$$ $$\begin{aligned}{K_{\phi \phi }} & = \sum\limits_{i = 2}^{N/2} {\frac{{\sin {\alpha _{Ti}}}}{{{{(1 + \varepsilon \cdot \cos {\alpha _{Ti}})}^3}}} \cdot {K_L}({\eta _i})} \cdot \sin ({{\uppi }} - {\vartheta _i}) \\ & \quad + \frac{{{\lambda ^2} \cdot \sin {\alpha _{T1}}}}{{{{(1 + \varepsilon \cdot \cos {\alpha _{T1}})}^3}}} \cdot {K_P}({\eta _1}) \cdot \sin ({{\uppi }} - {\vartheta _1}),\end{aligned}$$ $$\begin{aligned}{C_{rr}} & = \sum\limits_{i = 2}^{N/2} {\frac{{{C_L}({\eta _i})}}{{{{(1 + \varepsilon \cdot \cos {\alpha _{Ti}})}^3}}}} \cdot {\cos ^2}({{\uppi }} - {\vartheta _i}) \\ & \quad + \frac{{{\lambda ^3} \cdot {C_P}({\eta _1})}}{{{{(1 + \varepsilon \cdot \cos {\alpha _{T1}})}^3}}} \cdot {\cos ^2}({{\uppi }} - {\vartheta _1}),\end{aligned}$$ $$\begin{aligned}{C_{r\phi }} & = {C_{\phi r}} = \sum\limits_{i = 2}^{N/2} {\frac{{{C_L}({\eta _i})}}{{{{(1 + \varepsilon \cdot \cos {\alpha _{Ti}})}^3}}}} \cdot \cos ({{\uppi }} - {\vartheta _i}) \cdot \sin ({{\uppi }} - {\vartheta _i}) \\ & \quad + \frac{{{\lambda ^3} \cdot {C_P}({\eta _1})}}{{{{(1 + \varepsilon \cdot \cos {\alpha _{T1}})}^3}}} \cdot \cos ({{\uppi }} - {\vartheta _1}) \cdot \sin ({{\uppi }} - {\vartheta _1}),\end{aligned}$$ $$\begin{aligned}{C_{\phi \phi }} & = \sum\limits_{i = 2}^{N/2} {\frac{{{C_L}({\eta _i})}}{{{{(1 + \varepsilon \cdot \cos {\alpha _{Ti}})}^3}}}} \cdot {\sin ^2}({{\uppi }} - {\vartheta _i}) \\ & \quad + \frac{{{\lambda ^3} \cdot {C_P}({\eta _1})}}{{{{(1 + \varepsilon \cdot \cos {\alpha _{T1}})}^3}}} \cdot {\sin ^2}({{\uppi }} - {\vartheta _1}).\end{aligned}$$ Translating them from the r- ϕ coordinate frame into the x- y coordinate frame gives $$\left[ {\begin{array}{*{20}c} {KYY} & {KYX} \\ {KXY} & {KXX} \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {\cos \varPhi } & { - \sin \varPhi } \\ {\sin \varPhi } & {\cos \varPhi } \\ \end{array} } \right] \cdot \left[
{\begin{array}{*{20}c} {K_{rr} } & {K_{r\varphi } } \\ {K_{\varphi r} } & {K_{\varphi \varphi } } \\ \end{array} } \right] \cdot \left[ {\begin{array}{*{20}c} {\cos \varPhi } & {\sin \varPhi } \\ { - \sin \varPhi } & {\cos \varPhi } \\ \end{array} } \right],$$ $$\left[ {\begin{array}{*{20}c} {CYY} & {CYX} \\ {CXY} & {CXX} \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {\cos \varPhi } & { - \sin \varPhi } \\ {\sin \varPhi } & {\cos \varPhi } \\ \end{array} } \right] \cdot \left[ {\begin{array}{*{20}c} {C_{rr} } & {C_{r\varphi } } \\ {C_{\varphi r} } & {C_{\varphi \varphi } } \\ \end{array} } \right] \cdot \left[ {\begin{array}{*{20}c} {\cos \varPhi } & {\sin \varPhi } \\ { - \sin \varPhi } & {\cos \varPhi } \\ \end{array} } \right].$$ The stiffness coefficients KYY, KYX, KXY and KXX are non-dimensional. From Eqs. ( 43) and ( 44) it can be seen that they depend only on the location angles and the number of grooves; hence they change with different groove configurations. The same applies to the damping coefficients. For the purpose of comparison with other available methods, a new group of non-dimensional stiffness and damping coefficients is defined as follows: $$\left[ {\begin{array}{*{20}c} {K_{yy} } & {K_{yx} } \\ {K_{xy} } & {K_{xx} } \\ \end{array} } \right] = \left( {\frac{2B}{d}} \right)^{2} \cdot S \cdot\uppi \cdot \left[ {\begin{array}{*{20}c} {KYY} & {KYX} \\ {KXY} & {KXX} \\ \end{array} } \right],$$ $$\left[ {\begin{array}{*{20}c} {C_{yy} } & {C_{yx} } \\ {C_{xy} } & {C_{xx} } \\ \end{array} } \right] = \left( {\frac{2B}{d}} \right)^{3} \cdot S \cdot\uppi \cdot \left[ {\begin{array}{*{20}c} {CYY} & {CYX} \\ {CXY} & {CXX} \\ \end{array} } \right].$$ The Sommerfeld Number in Eqs. ( 47) and ( 48) is the one defined by Eq. ( 42).
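The frame change of Eqs. (45)-(46) is an ordinary similarity transform of a 2×2 matrix by the rotation through the attitude angle Φ. A minimal, dependency-free sketch (the function name is an assumption for illustration):

```python
import math

def rotate_to_xy(M, Phi):
    """Rotate a 2x2 stiffness or damping matrix from the r-phi frame
    to the x-y frame: R(Phi) * M * R(Phi)^T, per Eqs. (45)-(46)."""
    c, s = math.cos(Phi), math.sin(Phi)
    R = [[c, -s], [s, c]]
    Rt = [[c, s], [-s, c]]
    def mul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    return mul(mul(R, M), Rt)
```

Two useful sanity checks follow directly from the transform: a zero attitude angle leaves the matrix unchanged, and a symmetric matrix (such as the damping matrix with C_rφ = C_φr) stays symmetric, with its trace preserved.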
The final dimensional stiffness and damping coefficients are: $$\left[ {\begin{array}{*{20}c} {k_{yy} } & {k_{yx} } \\ {k_{xy} } & {k_{xx} } \\ \end{array} } \right] = \frac{{W_{o} }}{c} \cdot \left[ {\begin{array}{*{20}c} {K_{yy} } & {K_{yx} } \\ {K_{xy} } & {K_{xx} } \\ \end{array} } \right] = \frac{{\mu \cdot V \cdot B^{2} \cdot L}}{{c^{3} }}\left[ {\begin{array}{*{20}c} {KYY} & {KYX} \\ {KXY} & {KXX} \\ \end{array} } \right],$$ $$\left[ {\begin{array}{*{20}c} {c_{yy} } & {c_{yx} } \\ {c_{xy} } & {c_{xx} } \\ \end{array} } \right] = \frac{{W_{o} }}{c \cdot \varOmega } \cdot \left[ {\begin{array}{*{20}c} {C_{yy} } & {C_{yx} } \\ {C_{xy} } & {C_{xx} } \\ \end{array} } \right] = \frac{{\mu \cdot B^{3} \cdot L}}{{c^{3} }}\left[ {\begin{array}{*{20}c} {CYY} & {CYX} \\ {CXY} & {CXX} \\ \end{array} } \right].$$ The non-dimensional stiffness group in Eq. ( 47) and the non-dimensional damping group in Eq. ( 48) are directly comparable with existing circular bearing results, such as the long and short bearing theories and others. In this paper, the stiffness and damping coefficients from Childs and Moes [ 4 , 5 ] are of particular interest. They are considered accurate for non-grooved plain bearings and are used to verify the correctness of Eqs. ( 47) and ( 48). As for the non-dimensional stiffness coefficients, the damping coefficients have also been compared at the same groove number and L/ D ratio. In Figures 13 and 14, the stiffness and damping coefficients by Childs and Moes are based on L/ D = 2.0, and those for the grooved bearing are based on a groove number of 8. (Figure 13) Stiffness coefficients comparison. (Figure 14) Damping coefficients comparison. It is understandable that the stiffness coefficient \({K_{yy} }\) for the grooved bearing is slightly greater than that for the non-grooved bearing. This is because, due to the grooves, the pressure is more concentrated in the area around the loading center.
The same reason may explain why the stiffness coefficient \({K_{xx} }\) is lower than that of the non-grooved bearing. The cross stiffness coefficient \({K_{yx} }\) shows a behavior different from that of the non-grooved bearing. Another noticeable characteristic is that the turning point at which the cross stiffness coefficient \({K_{xy} }\) becomes negative is shifted to a lower eccentricity ratio. The damping coefficient \({C_{yy} }\) is almost identical for the non-grooved and the grooved bearing in this particular geometrical condition. The coefficient \({C_{xx} }\) of the grooved bearing is lower than that of the non-grooved bearing. The cross damping coefficient \({C_{xy} } = {C_{yx} }\) shows a larger difference at low and high eccentricity ratios and a small difference at intermediate eccentricity ratios. 7 Influence of the Number of Grooves As stated in the previous section, the stiffness and damping coefficients are functions not only of the eccentricity ratio but also of the number of grooves. Figure 15 compares two bearings with 8 grooves and 12 grooves, respectively. The effect on the individual stiffness coefficients differs. An increased groove number has an insignificant effect on \({K_{yy} }\) while it reduces \({K_{yx} }\). For the cross stiffness coefficient \({K_{xy} }\), an increased groove number shifts the turning point into negative values to a lower eccentricity ratio. (Figure 15) Influence of number of grooves on stiffness. Figure 16 presents a comparison of the damping coefficients for two bearings with 8 grooves and 12 grooves, respectively. Again, the groove number has little effect on C yy while affecting the other coefficients significantly. (Figure 16) Influence of number of grooves on damping. 8 Conclusions This paper provides a new method to calculate the load capacity and the stiffness and damping coefficients of water lubricated guide bearings with multiple axial grooves. The focus is on the effect of grooves and groove number. The paper does not include the effect of surface deformation.
The result is an approximation and can be applied to water lubricated bearings made from hard polymers at lower pressures, or from other materials such as lignum vitae wood and ceramics. The paper uses a so-called mixed scheme, meaning that the parabolic slider is used for the first pad only and the linear slider for the remaining pads. The stiffness and damping of the grooved bearing were investigated considering the groove effect. The stiffness and damping coefficients demonstrated characteristics different from those of bearings without grooves. Since the stiffness and damping coefficients are functions of the eccentricity ratio and of the number of grooves, the effect of the number of grooves was studied in depth. It showed that the number of grooves has little effect on the coefficients K yy and C yy while it has a larger effect on the other stiffness and damping coefficients. Further research considering surface deformation with similar modeling could be an interesting subject. The employer provided great support to this work and permitted publication of this paper as goodwill to the general public and the lubrication community. The paper was created during employment at Thordon Bearings Inc. The author declares that the objective of this work was fully devoted to the solution of a particular engineering problem and to a better understanding of its scientific nature in general. There is no commercial or associated interest that represents a conflict of interest in connection with this work to any third party. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Appendix: Parabolic Gap Sliding Bearing The procedure proposed in this paper uses a scheme that mixes different types of sliding bearings to build the entire circular bearing. One of the most important components is the parabolic gap sliding bearing. This appendix provides a procedure for deriving its four main functions, namely the load capacity, the location of the load center, the stiffness and the damping. Without loss of generality, the same coordinate system as shown in Figure 3 is used for this procedure. The shape of the parabolic gap is expressed as $$h(x,t) = h_{T} (t) \cdot \left[ {1 + (\eta - 1) \cdot \left( {\frac{x}{B}} \right)^{2} } \right],\quad \quad - B \le x \le 0,\;\eta \ge 1.$$ Introducing non-dimensional variables and parameters defined as follows: $$\begin{aligned} & x^{*} = \frac{x}{B}\;,\;\quad \tau = \frac{V \cdot t}{B},\quad h_{T}^{*} = \frac{{h_{T} (t)}}{{h_{T0} }},\quad \hfill \\ & h^{*} = h_{T}^{*} \cdot \left[ {1 + (\eta - 1) \cdot x^{*2} } \right],\quad p^{*} = \frac{{(p - p_{g} ) \cdot h_{T0}^{2} }}{\mu \cdot V \cdot B},\quad \hfill \\ \end{aligned}$$ where p is the pressure over the pad of unit length, \(p_{g}\) is the pressure in the water grooves, and t is time.
The Reynolds equation, taking into consideration the dynamic squeeze film action, is: $$\frac{\partial }{\partial x}\left( {h^{3} \cdot \frac{\partial p}{\partial x}} \right) = 6 \cdot \mu \cdot V \cdot \frac{\partial h}{\partial x} + 12\mu \cdot \frac{\partial h}{\partial t}.$$ Inserting the non-dimensional variables of Eq. ( A2) into Eq. ( A3), the Reynolds equation in non-dimensional form is $$\frac{\partial }{{\partial x^{*} }}\left( {{h^*}^3 \cdot \frac{{\partial p^{*} }}{{\partial x^{*} }}} \right) = 6 \cdot \frac{{\partial h^{*} }}{{\partial x^{*} }} + 12 \cdot \frac{{\partial h^{*} }}{\partial \tau }.$$ The small perturbation method seeks a solution of Eq. ( A4) not far from the steady state solution by a linearization approach, i.e., a solution of the form $$p^{*} = p_{o}^{*} + p_{1}^{*} \cdot \delta \cdot e^{i \cdot \tau } ,$$ $$h_{T}^{*} = 1 + \delta \cdot e^{i \cdot \tau } ,$$ where \(p_{0}^{*}\) is the non-dimensional pressure under steady operation and \(p_{1}^{*}\) is the perturbation amplitude of the dynamic pressure on top of the steady pressure. Strictly speaking, \(p_{1}^{*}\) is the coefficient of the non-dimensional dynamic pressure. δ is a small perturbation, a number much less than 1.0; its physical meaning is the ratio of the amplitude of the film thickness change to the minimum film thickness under steady operation. Inserting Eqs. ( A5) and ( A6) into Eq. ( A4) and equating the coefficients of zero order in "δ" on the left and right sides of Eq. ( A4) results in an equation for the pressure \(p_{o}^{ * }\): $$\frac{\partial }{{\partial {x^*}}}\left( {{{\left[ {1 + (\eta - 1) \cdot {x^{{*^2}}}} \right]}^{\;3}} \cdot \frac{{\partial p_o^*}}{{\partial {x^*}}}} \right) = 12 \cdot (\eta - 1) \cdot {x^*}.$$ By the same token, equating the coefficients of first order in "δ" on the left and right sides of Eq.
( A4), the coefficient of the dynamic pressure \(p_{1}^{*}\) fulfills the following equation: $$\frac{\partial }{{\partial {x^*}}}\left( {{{\left[ {1 + (\eta - 1) \cdot {x^{{*^2}}}} \right]}^{\;3}} \cdot \frac{{\partial p_1^*}}{{\partial {x^*}}}} \right) = - 24 \cdot (\eta - 1) \cdot {x^*} + 12 \cdot \left[ {1 + (\eta - 1) \cdot {x^{{*^2}}}} \right] \cdot i,\;i = \sqrt { - 1} .$$ In this procedure, all terms of order \(\delta^{2}\) and higher are neglected. The boundary conditions for the non-dimensional pressure \(p^{ * }\) are $$p^{ * } = \, 0{\text{ for }}x^{*} = 0{\text{ and }}x^{*} = {-} 1.$$ To fulfill these conditions, the non-dimensional steady pressure \(p_{0}^{*}\) as well as the real and imaginary parts of the non-dimensional dynamic pressure must all vanish on the boundaries. This is expressed as $$p_{0}^{*} = 0;\quad p_{1}^{ * } = p_{1,r}^{ * } = p_{1,i}^{ * } = 0\;{\text{for}}\;x^{*} = 0\;{\text{and}}\;x^{*} = - 1.$$ First, the solution of Eq. ( A7) is found. Integrating Eq. ( A7) twice, the non-dimensional steady pressure is expressed in the following form: $$p_o^* = \frac{3}{4}\left\{ {\frac{{{{\tan }^{ - 1}}{x^*}\sqrt {\eta - 1} }}{{\sqrt {\eta - 1} }} + \frac{{{x^{{*^3}}}(\eta - 1) - {x^*}}}{{{{\left[ {1 + (\eta - 1) \cdot {x^{{*^2}}}} \right]}^{\;2}}}}} \right\} + \frac{{{C_1}}}{8}\left\{ {\frac{{3{{\tan }^{ - 1}}{x^*}\sqrt {\eta - 1} }}{{\sqrt {\eta - 1} }} + \frac{{3{x^{{*^3}}}(\eta - 1) + 5{x^*}}}{{{{\left[ {1 + (\eta - 1) \cdot {x^{{*^2}}}} \right]}^{\;2}}}}} \right\}\; + {C_2}.$$ The boundary condition for \(p_{0}^{*}\) requires \(C_{2}\) = 0, and $$C_{1} = - 2\frac{{\eta^{2} \tan^{ - 1} \sqrt {\eta - 1} + (\eta - 2)\sqrt {\eta - 1} }}{{\eta^{2} \tan^{ - 1} \sqrt {\eta - 1} + (\eta + \frac{2}{3})\sqrt {\eta - 1} }}.$$ Inserting \(C_{1}\) into Eq.
( A11), the final non-dimensional steady pressure takes the form $$p_o^* = \frac{2}{{{\eta ^2}{{\tan }^{ - 1}}\sqrt {\eta - 1} + (\eta + 2/3)\sqrt {\eta - 1} }} \times \left\{ {{{\tan }^{ - 1}}{x^*}\sqrt {\eta - 1} + \frac{{{x^*}({x^{{*^2}}} - 1){{(\eta - 1)}^{\frac{3}{2}}} - {x^*}{\eta ^2}{{\tan }^{ - 1}}\sqrt {\eta - 1} }}{{{{\left[ {1 + (\eta - 1){x^{{*^2}}}} \right]}^{\;2}}}}} \right\}.$$ The load capacity function is the integral of the non-dimensional pressure (Eq. ( A13)): $$\varPi_{P} (\eta ) = \frac{{W_{o} \cdot h_{To}^{2} }}{{\mu \cdot V \cdot B^{2} \cdot L}} = \int\limits_{ - 1}^{0} {p_{o}^{ * } (x^{ * } ,\eta ) \cdot {\text{d}}x^{ * } } .$$ After carrying out the integration, the final result is $$\varPi_{P} (\eta ) = \frac{{(\eta - 2) \cdot \tan^{ - 1} \sqrt {\eta - 1} + \sqrt {\eta - 1} }}{{\eta^{2} \tan^{ - 1} \sqrt {\eta - 1} + (\eta + \frac{2}{3})\sqrt {\eta - 1} }}.$$ It is interesting to notice the similarity between the right side of Eq. ( A7) and the first term on the right side of Eq. ( A8). Since the solution of Eq. ( A7) produces the load capacity function, Eq. ( A15), the first (real) term on the right side of Eq. ( A8) must generate the stiffness function.
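The consistency of Eqs. (A13) and (A15) can be checked numerically: integrating the closed-form steady pressure over the pad should reproduce the load capacity function. A small self-contained sketch (the function names and the Simpson quadrature are assumptions for illustration):

```python
import math

def PI_P(eta):
    """Load capacity function of the parabolic-gap slider, Eq. (A15)."""
    t = math.sqrt(eta - 1.0)
    return ((eta - 2.0) * math.atan(t) + t) / (eta ** 2 * math.atan(t) + (eta + 2.0 / 3.0) * t)

def p_o(x, eta):
    """Non-dimensional steady pressure of the parabolic-gap pad, Eq. (A13)."""
    m = eta - 1.0
    t = math.sqrt(m)
    D = eta ** 2 * math.atan(t) + (eta + 2.0 / 3.0) * t
    num = x * (x * x - 1.0) * m ** 1.5 - x * eta ** 2 * math.atan(t)
    return (2.0 / D) * (math.atan(x * t) + num / (1.0 + m * x * x) ** 2)

def integrate(f, a, b, n=2000):
    """Composite Simpson rule (n even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3.0
```

Over a range of film-thickness ratios, the numerical integral of p_o* over s in [-1, 0] agrees with the closed form of Eq. (A15) to quadrature accuracy.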
It follows that the stiffness function is simply twice the load capacity function in magnitude, therefore $$K_{P} (\eta ) = 2\frac{{(\eta - 2) \cdot \tan^{ - 1} \sqrt {\eta - 1} + \sqrt {\eta - 1} }}{{\eta^{2} \tan^{ - 1} \sqrt {\eta - 1} + (\eta + \frac{2}{3})\sqrt {\eta - 1} }}.$$ The corresponding real part of the non-dimensional dynamic pressure coefficient is $$p_{1,r}^* = \frac{{ - 4}}{{{\eta ^2}{{\tan }^{ - 1}}\sqrt {\eta - 1} + (\eta + 2/3)\sqrt {\eta - 1} }} \times \left\{ {{{\tan }^{ - 1}}{x^*}\sqrt {\eta - 1} + \frac{{{x^*}({x^{{*^2}}} - 1){{(\eta - 1)}^{\frac{3}{2}}} - {x^*}{\eta ^2}{{\tan }^{ - 1}}\sqrt {\eta - 1} }}{{{{\left[ {1 + (\eta - 1){x^{{*^2}}}} \right]}^{\;2}}}}} \right\}.{\rm{ }}$$ The next task is to find the imaginary part of the non-dimensional dynamic pressure coefficient \(p_{1}^{*}\), which must satisfy the following equation: $$\frac{\partial }{{\partial {x^*}}}\left( {{{\left[ {1 - (\eta - 1) \cdot {x^{{*^2}}}} \right]}^{\;3}} \cdot \frac{{\partial p_{1,i}^*}}{{\partial {x^*}}}} \right) = 12 \cdot \left[ {1 - (\eta - 1) \cdot {x^{{*^2}}}} \right].$$ Following a procedure similar to that used for Eq. (A7), integrating Eq. (A18) twice gives the imaginary part of the non-dimensional dynamic pressure coefficient as $$p_{1,i}^* = - 2 \cdot \frac{{{x^{{*^2}}}(\eta - 1) + 2}}{{{{\left[ {1 + (\eta - 1) \cdot {x^{{*^2}}}} \right]}^{\;2}}(\eta - 1)}} + \frac{{{C_3}}}{8}\left\{ {\frac{{3{{\tan }^{ - 1}}{x^*}\sqrt {\eta - 1} }}{{\sqrt {\eta - 1} }} + \frac{{3{x^{{*^3}}}(\eta - 1) + 5{x^*}}}{{{{\left[ {1 + (\eta - 1) \cdot {x^{{*^2}}}} \right]}^{\;2}}}}} \right\}\; + {C_4}.$$ Applying the boundary conditions to the above equation, the two constants are $$C_{3} = \frac{{8\left[ {4\eta^{2} - 2(\eta + 1)} \right]}}{{3\eta^{2} \sqrt {\eta - 1} \cdot \tan^{ - 1} \sqrt {\eta - 1} + (\eta - 1)(3\eta + 2)}},\;C_{4} = \frac{4}{\eta - 1}.$$ Inserting them into Eq.
(A19) and integrating from x* = −1 to x* = 0, the damping function follows as $${C_P}(\eta ) = - \int\limits_{ - 1}^0 {p_{1,i}^*} ({x^*},\eta ) \cdot {\rm{d}}{x^*} = \frac{{2(2\eta + 1)}}{{3\eta + 2 + \frac{{3{\eta ^2}}}{{\sqrt {\eta - 1} }}{{\tan }^{ - 1}}\sqrt {\eta - 1} }} \cdot \left( {\frac{1}{\eta } + \frac{{3{{\tan }^{ - 1}}\sqrt {\eta - 1} }}{{\sqrt {\eta - 1} }}} \right) - \frac{{4\eta - 1}}{{\eta (\eta - 1)}} + \frac{{3{{\tan }^{ - 1}}\sqrt {\eta - 1} }}{{{{(\eta - 1)}^{\frac{3}{2}}}}}.$$ The total pad pressure appears as a complex function, $$p^{ * } = p_{o}^{ * } + (p_{1r}^{ * } + i \cdot p_{1i}^{ * } ) \cdot \delta \cdot e^{i\tau } .$$ The locations of the load center under steady operation and under dynamic vibration are slightly different. The location of the load center for steady operation is calculated with $$A_{P} (\eta ) = 1 + \frac{{\int\limits_{ - 1}^{0} {x \cdot p_{o}^{ * } {\text{d}}x} }}{{\int\limits_{ - 1}^{0} {p_{o}^{ * } {\text{d}}x} }},$$ and the location of the load center for the dynamic load only is calculated with $${A_{Pd}}(\eta ) = 1 + \frac{{\int\limits_{ - 1}^0 {x \cdot \sqrt {p_{1r}^{{*^2}} + p_{1i}^{{*^2}}} {\rm{d}}x} }}{{\int\limits_{ - 1}^0 {\sqrt {p_{1r}^{{*^2}} + p_{1i}^{{*^2}}} {\rm{d}}x} }}.$$ Since \(p_{1r}^{ * }\) is twice the static pressure \(p_{o}^{ * }\) and dominates \(p_{1i}^{ * }\) in magnitude, the value of Eq. (A24) differs little from that of Eq. (A23). A ratio \(R_{P} (\eta ) = A_{Pd} (\eta )/A_{P} (\eta )\) is defined to compare the difference between Eqs. (A23) and (A24). This ratio is defined analogously for the exponential and linear sliders (see Figure 4d). This paper uses the static load center for the Sommerfeld number evaluation and the dynamic load center for the stiffness and damping evaluation for all three types of sliding bearings. The notation \(A_{Ed}\;{\text{and}}\;A_{Ld}\) denotes the dynamic load centers of the exponential and linear sliders, respectively.
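The closed-form results above are easy to evaluate numerically. The following Python sketch (function names are ours; valid for film thickness ratio \(\eta > 1\)) implements the load capacity, stiffness, and damping functions of Eqs. (A15), (A16) and (A21):

```python
import math

def load_capacity(eta):
    """Pi_P(eta): non-dimensional load capacity, Eq. (A15). Requires eta > 1."""
    s = math.sqrt(eta - 1.0)
    a = math.atan(s)  # tan^-1 sqrt(eta - 1)
    return ((eta - 2.0) * a + s) / (eta**2 * a + (eta + 2.0 / 3.0) * s)

def stiffness(eta):
    """K_P(eta): stiffness function, Eq. (A16) -- twice the load capacity."""
    return 2.0 * load_capacity(eta)

def damping(eta):
    """C_P(eta): damping function, Eq. (A21). Requires eta > 1."""
    s = math.sqrt(eta - 1.0)
    a = math.atan(s)
    term1 = (2.0 * (2.0 * eta + 1.0)
             / (3.0 * eta + 2.0 + 3.0 * eta**2 * a / s)
             * (1.0 / eta + 3.0 * a / s))
    return term1 - (4.0 * eta - 1.0) / (eta * (eta - 1.0)) + 3.0 * a / s**3
```

For example, at \(\eta = 2\) this gives \(\varPi_P \approx 0.172\) and \(K_P = 2\varPi_P \approx 0.344\), consistent with the stated relation between stiffness and load capacity.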
https://doi.org/10.1186/s10033-020-00492-w Chinese Journal of Mechanical Engineering
Is there a simple relation between delta-v and travel time? For example, if it takes a delta-v of 4 km/s to fly from LEO to Mars in 8 months, according to one mission plan, would it take half the time, 4 months, if the delta-v was doubled to 8 km/s? Is there any rule-of-thumb relationship, or must a completely new trajectory be calculated for any change of delta-v?
orbital-mechanics
LocalFluff
You seem to be thinking in terms of speed, a scalar quantity. Of course if you double speed, you cover a given distance in half the time. But velocity is a vector quantity. It has direction as well as magnitude. Here is a diagram of adding vector quantities: In the above illustration, the red vector would be the delta V (change in velocity) in going from the blue vector to the black vector. So changing speed isn't the only delta V. Changing direction also boosts delta V. For example, imagine two cars with different speeds, 30 mph and 40 mph. If they're going the same direction in parallel lanes, the difference is 10 mph. If they T-bone at an intersection, the difference is 50 mph. If they meet head on, the velocity difference has a 70 mph magnitude. Direction can make a huge difference. Here's an illustration of an earth to Mars Hohmann: Look at the transfer orbit's velocity vectors at earth and at Mars. Notice the velocity vectors are pointing the same direction, so only a change in speed is needed. Now here's an illustration of a non-Hohmann earth to Mars transfer: Notice that the Mars velocity vector is nearly the same size as the transfer orbit's velocity vector when it crosses Mars' orbit. We don't need to speed up or slow down much. But changing direction requires a lot of delta V. Deviating from Hohmann for a 4-month trip would boost your delta V by a lot more than double. This is due to the direction change non-Hohmann transfers require. Here are three more examples, transfer orbits depicted by Rikki-Tikki-Tavi in another answer to this question.
In Rikki's illustration, 3 Vinfinity quantities are given, those with regard to earth at departure. His illustration lacks the Vinfinity numbers with regard to Mars. My illustration shows Vinfinity vectors as red arrows. It has Vinfinity quantities for both earth and Mars. Outside of a planet's sphere of influence, the transfer orbits can be modeled as ellipses or a parabola about the sun. But within a planet's sphere of influence, the path is better modeled as a hyperbola about the planet. Vinfinity is a hyperbola's velocity at an infinite distance from the gravitating body. A hyperbola's speed can be found by $\sqrt{V_{esc}^2+V_{inf}^2}$. Escape velocity is $\sqrt{2GM/r}$. As you can see, escape velocity grows larger as you get closer to the planet. At the edge of a planet's sphere of influence, escape velocity is close to zero and the hyperbolic velocity is very close to Vinfinity. Escape velocity is around 5 km/s near Mars' surface. So a hyperbolic orbit grazing Mars' atmosphere would have a speed of $\sqrt{5^2+V_{inf}^2}$ km/s. For Rikki's 3 examples this becomes:
$\sqrt{5^2+2.65^2}$ km/s = ~5.7 km/s (the Hohmann orbit)
$\sqrt{5^2+6.23^2}$ km/s = ~8 km/s
$\sqrt{5^2+20.31^2}$ km/s = ~21 km/s (the parabolic transfer orbit)
The 5.7 km/s is what needs to be shed for a Mars landing coming from a Hohmann orbit. Aerobraking can accomplish this for smaller payloads, but for more massive payloads this is hard. Only 0.7 km/s is needed to brake into a capture orbit about Mars. Given the 8 km/s hyperbola periapsis velocity, you will need to shed twice as much kinetic energy for a soft landing. 3 km/s delta V would be needed to brake into a capture orbit about Mars. Rikki seems to believe this is trivial, but it's not. For a 21 km/s hyperbola periapsis velocity I don't think it's practical to use aerobraking for a soft landing. 16 km/s would need to be shed to brake into a capture orbit. 16 km/s is about what it takes to get from earth's surface to the moon's surface.
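The periapsis-speed arithmetic above is easy to check. This short Python sketch (our own illustration, using the ~5 km/s Mars surface escape velocity quoted in the answer) evaluates $\sqrt{V_{esc}^2+V_{inf}^2}$ for the three transfer orbits:

```python
import math

def hyperbolic_periapsis_speed(v_esc, v_inf):
    """Speed on a hyperbolic orbit at the radius where escape
    velocity is v_esc, given hyperbolic excess velocity v_inf."""
    return math.sqrt(v_esc**2 + v_inf**2)

V_ESC_MARS = 5.0  # km/s, near Mars' surface (value used in the answer)

for v_inf in (2.65, 6.23, 20.31):  # km/s, the three transfer orbits
    v = hyperbolic_periapsis_speed(V_ESC_MARS, v_inf)
    print(f"v_inf = {v_inf:5.2f} km/s -> periapsis speed ~ {v:4.1f} km/s")
```

The three results round to the ~5.7, ~8, and ~21 km/s figures quoted above.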
HopDavid
I disagree with you about the delta V for Mars orbit injection being the major factor. You can use aerobraking for most of that $\Delta v$, as described in my post. Nice illustration though. Did you draw it yourself? – Rikki-Tikki-Tavi Aug 26 '14 at 23:49
@Rikki-Tikki-Tavi the quantities in red are V infinities. Near Mars' surface, hyperbolic velocity would be sqrt(vinf^2 + vesc^2). Near Mars' surface Vesc is about 5 km/s. So speed would be sqrt(5^2 + 2.6^2), or about 5.5 km/s for Hohmann. Martian aerobraking can shed that for small payloads. – HopDavid Aug 27 '14 at 0:25
For the other transfer orbit I depicted, hyperbolic speed at Mars' surface would be sqrt(11^2 + 5^2) or about 12 km/s. It would be tough for Martian aerobraking to shed that much speed. – HopDavid Aug 27 '14 at 0:28
Yes, as I described in my post, there are limits to aerobraking. But for reasonable trajectories, it works well. – Rikki-Tikki-Tavi Aug 27 '14 at 0:30
A transfer orbit from earth to Mars would be hyperbolic with regard to Mars. So there is a vinf wrt Mars. That's not shown in your diagram. Do you know what the Vinfs wrt Mars would be? – HopDavid Aug 27 '14 at 1:03
A particular delta-v (relative to a given celestial body, such as the Moon) actually implies a particular transfer orbit toward the target body. Also, delta-v is the difference in velocity produced by a particular maneuver; it is not an absolute velocity. So let's say you start out at 11 km/s relative to the Earth (in an approximate Earth-Moon system transfer orbit) and then near the Moon apply a further delta-v of 4 km/s along your direction of travel. Ignoring orbital effects, that will push your orbital speed to 15 km/s.
If you instead double the delta-v (which means you need to bring along a lot more fuel, which increases the fuel requirement for achieving the initial E-M transfer orbit, which means you need even more fuel just to get off the ground, and round and round we go in the rocket equation...) to 8 km/s, not only do you end up in a different orbit (so you have to redo all the orbital calculations) but your final velocity is also only 19 km/s at most. You gain about 26% in terms of absolute velocity by doubling the delta-v. An astronaut on a spacewalk in Earth orbit might start out with an orbital velocity on the order of 7-8 km/s (the orbital velocity of their spacecraft) and apply a delta-v of a fraction of a m/s in order to get to where they want to go. And so on. For a perhaps somewhat extreme but very real example, consider that Apollo could abort the 11 km/s transfer orbit to the moon by applying an approximate 1.8 km/s delta-v about a third to half of the way from the Earth to the Moon. (True "abort" would require a delta-v of twice the current velocity, to reverse the direction of travel while in the end resulting in the same orbital velocity in a reversed orbit. Even ignoring whether it's practical to "reverse" the orbit like that in the first place, applying an unplanned delta-v of well over 20 km/s to something as massive as the Apollo CSM, let alone the CSM/LM pair, is not an easy feat.) That abort wasn't an option in the case of Apollo 13, but it (and the illustrative graph shown on the Wikipedia page linked) goes to show some of how it's not quite as simple as "double the delta-v, halve the time" or even "double the velocity, halve the time".
a CVn
This is incorrect. Yes, trans-lunar insertion starts at ~11 km/s when departing from LEO. But as you near the moon you are no longer traveling 11 km/s. Near apogee you are traveling around 0.2 km/s.
Applying a 4 km/s burn near the moon would push you to a 4.2 km/s orbit wrt earth (neglecting effects of the moon's gravity). If you applied the same 4 km/s while traveling 11 km/s just above earth's surface, the energy gained would be larger (due to the Oberth effect). – HopDavid Aug 26 '14 at 17:35
It's not necessary to take an entirely different route every time. There is the minimum energy transfer that you mentioned, but you can use more fuel (or, better yet, a more efficient engine) to take a similar but different route that gets you there faster. The rise in energy requirement is moderate at first, but becomes uncomfortable if you want to cut your travel time by more than a month or so. I think there is a table in a book I have at home ("Astronautics" by Ulrich Walter). I will let you know. Also, there are entirely different courses you can take, but I don't think any of them are faster. For example, in some constellations a Venus flyby gives you the chance to return after only a short stay at Mars, rather than waiting out a full cycle. I misremembered. In Astronautics I found the following two figures (which are even better; read below): The image above gives three examples of orbits we could fly. You could fly even more extreme courses than the 70-day one, but you can see how the required earth escape speed $v_\infty$ becomes uncomfortable. This graph gives the same in exact numerical terms. You can see that shortening the time a bit is quite cheap, but then it quickly becomes expensive. $t_x$ is the transfer time; $t_H$ is the time for the Hohmann transfer. I won't explain all the variables involved; I don't think anyone has the patience to read that.
It is true that you also need a burn to turn into Mars' orbit, but I disagree with HopDavid about this being the major factor: The $\Delta v$ for the injection is $v_\infty\sqrt{\frac{1-e}{2}}$ where $e$ is the eccentricity of the orbit around Mars and $v_\infty$ is the excess velocity with regard to Mars now. This can be very high, because you can slowly circularize the orbit for free by aerobraking. For $e=0.95$ the Mars injection takes only 16% of $v_\infty$. Of course, there is a limit to how eccentric your orbit can be, because you may escape Mars' sphere of influence if you come in too fast.
Rikki-Tikki-Tavi
But one cannot take the same route faster: the (hyperbolic) trajectory will have another shape if one has a higher speed, the distance traveled will be different because Mars is a moving target, and the Mars insertion orbit will be different, right? – LocalFluff Aug 26 '14 at 11:30
Yes. Also, the starting time frame may be different. You can't realistically go to Mars without using the Earth's orbital speed, so you are dependent on the two bodies being in specific places. These aren't Hohmann transfers anymore, so there is quite a bit of optimization involved. – Rikki-Tikki-Tavi Aug 26 '14 at 12:43
Escape velocity near Mars' surface is about 5 km/s. You can get a Mars capture orbit's eccentricity as close as you like to 1; let's call it 0.996. You're not going to get a periapsis velocity higher than 5.01 km/s. – HopDavid Aug 27 '14 at 0:46
At periapsis of the hyperbola, velocity is sqrt(vesc^2 + vinf^2). See en.wikipedia.org/wiki/Hyperbolic_trajectory#Velocity Even in the case of Hohmann this would be about 5.6 km/s. So for injection into a Mars capture orbit, you'd need 0.6 km/s. But 2.6 km/s * sqrt((1-.996)/2) is about 0.12 km/s. I believe your equation for injection to Mars capture orbit is incorrect.
– HopDavid Aug 27 '14 at 0:53
Your diagram has V infinities wrt earth, but not the Vinfs wrt Mars. – HopDavid Aug 27 '14 at 0:57
Volume 20 Supplement 2 Selected articles from the 17th Asia Pacific Bioinformatics Conference (APBC 2019): genomics
Estimating the total genome length of a metagenomic sample using k-mers
Kui Hua & Xuegong Zhang
Metagenomic sequencing is a powerful technology for studying the mixture of microbes, or the microbiomes, on the human body and in the environment. One basic task in analyzing metagenomic data is to identify the component genomes in the community. This task is challenging due to the complexity of microbiome composition, the limited availability of known reference genomes, and usually insufficient sequencing coverage. As an initial step toward understanding the complete composition of a metagenomic sample, we studied the problem of estimating the total length of all distinct component genomes in a metagenomic sample. We showed that this problem can be solved by estimating the total number of distinct k-mers in all the metagenomic sequencing data. We proposed a method for this estimation based on the sequencing coverage distribution of observed k-mers, and introduced a k-mer redundancy index (KRI) to fill the gap between the count of distinct k-mers and the total genome length. We showed the effectiveness of the proposed method on a set of carefully designed simulation datasets corresponding to multiple situations of true metagenomic data. Results on real data indicate that the uncaptured genomic information can vary dramatically across metagenomic samples, with the potential to mislead downstream analyses. We posed the question of what the total genome length of all distinct species in a microbial community is, and introduced a method to answer it. It is now widely known that microbiomes, the ecological communities of microbes living at a certain site of the human host such as the gut, can play important roles in human health [1–5].
Metagenomic sequencing is a powerful technology for studying the microbiome by sequencing DNA from all the genomes of its component microbes [5]. Since it is impossible to capture the full components of a microbiome, a 'metagenomic sample' is actually a subset of the target metagenome captured by the sequencing process, much as a sample is drawn from a population in statistics [6]. The basic task of a metagenomic study is to read out the underlying information about the microbiome from the metagenomic sample. For any genomic sequencing study, a fundamental property we need to consider is the sequencing coverage, which is the fraction of genomic material that has been captured and sequenced. This, however, has been largely ignored in metagenomic studies [6]. The level of coverage of a metagenomic sample is of key importance for recovering the information about the microbiome. Variations caused by coverage differences between metagenomic samples can be wrongly attributed to biological reasons, resulting in misleading conclusions [6]. The question of estimating the coverage of a sequencing sample has attracted researchers' attention since the beginning of the human genome project. In 1988, Eric S. Lander and Michael S. Waterman introduced the famous Lander-Waterman theory to show how well a genome can be recovered under a certain sequencing strategy [7]. It played a key role in guiding the design and completion of the human genome project. Lander-Waterman theory was specially designed for single-genome sequencing projects. It is no longer suitable for most metagenomic data, since the relative abundances of component genomes in a microbiome are very uneven and the sequencing procedure therefore violates the uniform distribution assumption [8]. This is also true for other types of sequencing projects, like RNA-sequencing or ChIP-seq, where the distributions of components to be sequenced are uneven.
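As a concrete reminder of what the classical model predicts, the Lander-Waterman expectation for the fraction of a single genome covered by uniformly placed reads is \(1 - e^{-c}\), where \(c = LN/G\) is the redundancy (read length times number of reads, divided by genome length). A minimal sketch (the function name is ours):

```python
import math

def expected_covered_fraction(read_len, n_reads, genome_len):
    """Lander-Waterman expectation: with redundancy c = L*N/G,
    the expected covered fraction of the genome is 1 - e^-c.
    Assumes reads are placed uniformly at random -- precisely the
    assumption that uneven metagenomic abundances violate."""
    c = read_len * n_reads / genome_len
    return 1.0 - math.exp(-c)
```

For instance, 100,000 reads of length 100 on a 5 Mb genome give c = 2 and an expected covered fraction of about 86%.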
Methods were therefore introduced to estimate the coverage, or to solve similar problems, in such situations [8–12]. For example, Hooper et al. proposed a method to estimate the total number of genomic bins in a metagenome by assuming a certain abundance distribution of the microbial composition [8]. Rodriguez et al. assessed the abundance-weighted coverage of a metagenomic sample by examining the redundancy among individual reads [10]. Daley and Smith introduced an empirical Bayesian method to predict the number of previously un-sequenced molecules that would be observed if additional reads were provided [9]. This method has been demonstrated to be powerful on several kinds of sequencing data, such as ChIP-seq and RNA-seq data, but its effectiveness on metagenomic data has not been studied. For the genomic sequences that have been captured in a metagenomic sample, the basic information we want is which types of microbes are present and at what abundances. This is referred to as taxonomy profiling. A straightforward way of taxonomy profiling is to map sequencing reads to reference genomes in known databases. However, known microbial genomes represent only a small proportion of existing microbes. Even for well-studied communities like the human gut, it is typical that around 30–60% of the sequencing reads in a metagenomic sample cannot be mapped to any known microbial genome [13]. Furthermore, it has been observed that the fraction of unmapped reads can vary dramatically across samples in the same study, ranging, surprisingly, from 2 to 96% [14]. This type of between-sample variation is lost when relative abundances are calculated based on mapped reads only. Ignoring such loss of information can be misleading in downstream analyses [5]. Mainly because of the incomplete coverage and the existence of unmapped reads, the genomes that can be profiled from a metagenomic sample are only a part of all the genomes that exist in the microbiome.
It is therefore desirable to make estimations about the genomes that have been missed. Even if it is not possible to accurately estimate the number of missed genomes and their relative abundances, any educated guess about the properties of missed genomes can provide useful information for the comparison of samples based on known genomes. In this paper, we study the problem of estimating the total length of all distinct genomes in a metagenomic sample. If we can estimate this with reasonable accuracy, we will know a lot about the missed genomes by subtracting the known and mapped genomes from the total. This is equivalent to estimating the actual coverage of the unknown target microbiome achieved by the observed sequencing reads in the metagenomic sample. While this manuscript was in preparation, a similar question was studied in [15], but that method requires both long reads and short reads. For the common case where only short reads are available, we found that this question can be reduced to the related question of estimating the number of distinct k-mers that would be observed in the metagenome given infinite sequencing depth. A statistical model is introduced to predict the number of distinct k-mers in a metagenome that have not been included in the observed data, and we define a k-mer redundancy index (KRI) that helps to estimate the total genome length from the total distinct k-mer count. Since the underlying truth is unknown in any real metagenomic data, we simulated a set of synthetic metagenomic datasets covering different situations of microbial composition. Experiments on these data showed that the proposed method works well. The problem we study is to estimate the total length of distinct genomes in a microbiome based on the metagenomic sequencing data. A more precise statement of this problem in practice depends on the criteria by which two genomes are identified as distinct from each other.
This is a complicated taxonomic question considering the wide existence of strains and sub-strains within each microbial species. To focus on the key mathematical problem behind the question, we simply assume that genomes from the same species are the same while genomes from different species are distinct. We discuss this further in the "Estimating KRI of the distinct genome set" section. Understanding DNA sequence as a collection of k-mers A DNA sequence can be viewed as a collection of k-mers by breaking the sequence into nucleotide substrings of length k, as illustrated in Fig. 1a. From the k-mer perspective, we define the total k-mer count (TKC), distinct k-mer count (DKC) and k-mer redundancy index (KRI) as three properties of a sequence. TKC is the number of all k-mers obtained when breaking a sequence into k-mers. DKC is the number of distinct k-mers, i.e., the number of k-mers remaining after all duplicates are removed. KRI is defined as the ratio of TKC to DKC, which reflects the degree of repetition of k-mers in the sequence. The values of these three properties depend on the target sequence and the choice of k-mer size (k). For a given k, any of the three properties can be obtained if the other two are provided. For example, TKC = DKC × KRI, which means TKC is obtainable if we know the DKC and KRI of a k-mer collection. Obviously, for a sequence of length L, TKC = L − k + 1, so TKC can be roughly taken as the sequence length when L ≫ k, which is satisfied when studying genomes using small k-mers. These simple mathematical relations form the basic idea of our work. Overview of the proposed method. a An illustration of understanding DNA sequence as a collection of k-mers. In this simple case, sequence length L = 12, k = 6 for the k-mer counting, TKC = L − k + 1 = 7, DKC = 5, KRI = TKC/DKC = 1.4. b Relationships between metagenome, metagenomic sample and the set of distinct genomes in the metagenome.
c Workflow of the proposed method Similarly, a set of sequences can also be treated as a collection of k-mers by breaking every single sequence into k-mers. Therefore, a metagenomic sample, the metagenome and the set of distinct genomes in a metagenome can each be viewed as a collection of k-mers, as illustrated in Fig. 1b. Overview of our solution From the k-mer perspective, our aim of estimating the total genome length of all distinct genomes in a metagenome is equivalent to estimating the TKC of the set of distinct genomes (Fig. 1b). Since it is impossible to count the TKC of the true metagenome from the metagenomic sample, due to finite sequencing coverage and unknown genome composition, we predict the TKC of the distinct genome set by estimating its DKC and KRI separately (Fig. 1c). A metagenome and the corresponding set of distinct genomes of all its components differ only in genome abundances; they therefore share the same distinct k-mers and have equal DKCs. We estimate the DKC of the metagenome from the observed metagenomic data by modeling the sequencing event as a Poisson sampling procedure. The KRI of the distinct genome set can be estimated based on known genomes detected in the metagenomic sample. Finally, the total genome length, which is roughly equal to the TKC, can be obtained simply by taking the product of KRI and DKC. Predicting DKC of the metagenome A metagenomic sample can be viewed as a subset of the metagenome obtained by random sampling, as illustrated in Fig. 1b. The DKC of a metagenomic sample can be readily obtained by counting k-mers in the sequences, either from the original sequencing reads or from the assembled scaffolds. What we need to estimate is the number of k-mers in the metagenome that have not been covered in the metagenomic sample. The number of times a given k-mer i is sequenced, denoted xi, can be modeled as a Poisson random variable with an unknown parameter λi.
The probability that k-mer i will not be sequenced is \(e^{-\lambda_{i}}\). We call these uncaptured k-mers. Although the frequencies of overlapping k-mers are mutually dependent, this limited dependence is well approximated by assuming independence [16, 17]. We further assume that the λi independently and identically follow some unknown distribution μ(λ); the expected number of uncaptured k-mers is then $$\begin{array}{@{}rcl@{}} N\int \limits_{0}^{\infty} e^{-\lambda} \,\mathrm{d}\mu(\lambda) \end{array} $$ (1) where N is the DKC of the metagenome. Since both N and μ(λ) are unknown, we are not able to calculate the value of (1) directly. Fortunately, the frequencies of captured k-mers in the metagenomic sample also contain information about N and μ(λ), which helps us estimate the value of (1). Let nj denote the number of k-mers that appear j times in the metagenomic sample. The expectation of nj can be written as $$\begin{array}{@{}rcl@{}} E(n_{j}) = N\int \limits_{0}^{\infty} e^{-\lambda}\lambda^{j}/j! \,\mathrm{d}\mu(\lambda) \end{array} $$ If we take the observation nj as its expectation E(nj), the mathematical problem of estimating the number of uncaptured k-mers can be formulated as: Given observations n1,n2,n3,…,nM, which follow the formula $$ {n_{j}= N \int \limits_{0}^{\infty} e^{-\lambda}\lambda^{j}/j! \,\mathrm{d}\mu(\lambda)} $$ where N and μ(λ) are unknown, find the value of $${N\int \limits_{0}^{\infty} e^{-\lambda}\,\mathrm{d}\mu(\lambda)} $$ To solve this mathematical problem, let \(\omega(\lambda)=N\lambda e^{-\lambda}\) and \(m_{i}=(i+1)!\,n_{i+1}\); the problem can then be re-written as: Given observations m0,m1,m2,…,mM−1, which follow the formula $${m_{j}= \int \limits_{0}^{\infty} \lambda^{j}\omega(\lambda)\,\mathrm{d}\mu(\lambda)} $$ where ω(λ) and μ(λ) are unknown.
Find the value of $${\int \limits_{0}^{\infty} \frac{1}{\lambda} \omega(\lambda) \,\mathrm{d}\mu(\lambda)} $$ This is a special type of Gaussian quadrature problem that can be solved using the Golub-Welsch algorithm [9, 18]. The final estimate of (1) can be written as $$\begin{array}{@{}rcl@{}} N\int \limits_{0}^{\infty} e^{-\lambda}\,\mathrm{d}\mu(\lambda) \approx \sum_{i=1}^{M} \frac{\alpha_{i}}{\lambda_{i}} \end{array} $$ where αi and λi are determined by the Golub-Welsch algorithm taking m0,m1,m2,…,mM−1 as the input. The DKC of the metagenome is finally obtained by adding this estimated number of uncaptured k-mers to the DKC of the metagenomic sample. The variability and reliability of the estimate are reflected by the confidence interval obtained with the bootstrap method. Estimating KRI of the distinct genome set To precisely estimate the KRI of the set of distinct genomes of a metagenome, one would need to know all the different genomes in the metagenome, which is usually unachievable due to the existence of many unknown microbes. To deal with this problem, we reasoned that the KRI of a genome set can be well estimated using only part of the genomes in it. Therefore, we can use known genomes detected in a metagenomic sample to estimate the KRI of the whole distinct genome set. In practice, we first apply MetaPhlan2 [19] and GOTTCHA [20] to the metagenomic data to identify known species in the metagenome. For each detected species, we select one of its reference genomes from the database [23] to form a genome set. An alternative way to form the genome set is to take the assembled scaffolds as detected genomes. We take the KRI of this set of detected genomes as an estimate of the KRI of the distinct genome set. The way known genomes are selected to estimate the KRI actually determines the criteria for identifying distinct genomes in our work.
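The KRI bookkeeping in this step is simple enough to sketch directly. A minimal Python illustration (the "genomes" below are toy strings, not real references) computes the TKC, DKC and KRI of a pooled genome set exactly as defined above:

```python
from collections import Counter

def kmers(seq, k):
    """Break a sequence into its overlapping k-mers (TKC = len(seq) - k + 1 of them)."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def kri_of_genome_set(genomes, k):
    """Pool all k-mers of a genome set and return (TKC, DKC, KRI)."""
    counts = Counter()
    for g in genomes:
        counts.update(kmers(g, k))
    tkc = sum(counts.values())   # total k-mer count
    dkc = len(counts)            # distinct k-mer count
    return tkc, dkc, tkc / dkc   # KRI = TKC / DKC

# Toy "detected genomes"; real use would stream sequences from reference FASTA files.
tkc, dkc, kri = kri_of_genome_set(["ACGTACGTACGT", "TTACGTACGAAT"], k=6)
```

For a single sequence the same code reproduces TKC = L − k + 1; at genome scale one would count k-mers with a dedicated tool such as jellyfish rather than hold them all in memory.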
Since we select only one genome for each detected species to estimate the KRI of the set of distinct genomes, the estimation is restricted to the species level, even if two strains of the same species were detected in the metagenomic sample. If we included genomes for all detected strains in the KRI estimation, the estimation would be at the strain level. Implementation of the method We first adopt Pollux [21] to correct the sequencing errors in the metagenomic samples. Counting all k-mers in a metagenomic sample can be computationally heavy; we employ jellyfish2 [22], one of the fastest k-mer counting approaches, for the k-mer counting step. We use the Golub-Welsch algorithm implemented in preseq [9, 17] to estimate the distinct k-mer count. MetaPhlan2 [19] and GOTTCHA [20] are used to identify the known species in the metagenomic sample. Genomes for those known species are selected from an existing database [23] to estimate the KRI for the whole community. Simulated metagenomic datasets Due to the complexity of real-world microbiome compositions, it is hard, if at all possible, to find real metagenomic data with a complete ground truth for all components. To test the performance of our method, we simulated several microbial communities covering different situations and generated synthetic metagenomic samples. We simulated communities with 10 species and 50 species as representatives of a simple case and a more complicated case. We used three types of composition abundance distributions to form microbial communities of low, medium and high complexity (LC, MC and HC), following a previous simulation study [24]. LC, MC and HC are defined based on the number of dominant microbes, i.e., those with a high relative abundance. LC has only one dominant microbe. MC has two or more dominant species. HC has no dominant species. The fraction of information captured by the metagenomic data is of key importance for estimating the total genome length.
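The LC/MC/HC distinction can be made concrete with toy relative-abundance vectors. The numbers and the dominance threshold below are hypothetical, chosen only to satisfy the criteria just described, not taken from the simulation study:

```python
# Toy relative-abundance vectors for a 10-species community (values hypothetical).
lc = [0.60] + [0.40 / 9] * 9        # low complexity: one dominant species
mc = [0.40, 0.35] + [0.25 / 8] * 8  # medium complexity: two dominant species
hc = [0.10] * 10                    # high complexity: no dominant species

def n_dominant(abundances, threshold=0.2):
    """Count species above a (hypothetical) dominance threshold."""
    return sum(a > threshold for a in abundances)

# Each vector is a proper probability distribution over species.
for dist in (lc, mc, hc):
    assert abs(sum(dist) - 1.0) < 1e-9
```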
To reflect this property of a metagenomic sample, we define the initial coverage as the fraction of distinct k-mers in the set of distinct genomes of the target community that are included in the sequencing data. For each community, metagenomic samples with different read numbers were generated to simulate different sequencing depths and initial coverages of the community. To check how robust the method is to random effects, we used three random seeds to generate samples for the same parameters. In total, 225 metagenomic samples with 10 species and 243 samples with 50 species were generated with an in-house simulation tool [25]. Besides the error-free samples, we also generated a set of metagenomic samples with sequencing errors for each community. We did some simple simulations to show that the KRI of a genome set can be estimated using only part of its genomes. We simulated four metagenomes with 10, 50, 100 and 200 species, respectively. For each metagenome, we randomly selected 60% of its component genomes as known ones to estimate the KRI of the whole metagenome. Although in the real world known microbes are not randomly sampled from nature, the order in which they became known has nothing to do with their sequence content. Therefore, we believe such random selection is reasonable. Real metagenomic datasets We selected two datasets on which to apply our method. One dataset contains 65 oral metagenomic samples from the Human Microbiome Project (HMP) [26] and the other consists of 145 human gut metagenomic samples, including 71 from normal people and 74 from type 2 diabetes patients [27]. Results on simulated metagenomic datasets We tested our method on all synthetic metagenomic samples. Fig. 2 shows how well the number of distinct k-mers (DKC) in a community can be estimated from a metagenomic sample. The whole figure contains two parts, showing results for communities with 10 species and 50 species, respectively.
Each part consists of three panels, displayed from left to right. Further explanations of each panel are given in the figure caption. As expected, the overall prediction for samples with 10 species is better than for samples with 50 species. Communities with high complexity achieve the best prediction accuracy among the three kinds of abundance distributions. This agrees with the intuition that the more even the abundance distribution is, the better the prediction will be. The performance on communities with medium complexity is the worst. This is because the two dominant species make up more than 70% of the community, which means that most of the reads are sequenced from them. Since less than 30% of the reads come from the rest of the species, only a small part of the information about their genomes is reflected in the sequencing data, leading to the poor performance, especially when the sequencing depth is low. We also show how the performance changes as the initial coverage increases. The performance is measured by the relative error, defined as the difference between the estimated and true values divided by the true value. In general, the performance gets better as the initial coverage increases. Another interesting observation is that, for most cases, the Golub-Welsch algorithm gives a good estimate that tends to be no larger than the ground truth, and the corresponding bootstrap confidence interval is usually small. When the estimate is exaggerated, the Golub-Welsch algorithm is more likely to give a large bootstrap confidence interval. Therefore, the Golub-Welsch algorithm provides a reliable estimate of the lower bound of the DKC, as suggested in preseq [9]. Different microbial communities were simulated to test the performance of the proposed method. (a) Results for microbial communities with 10 species. The three histograms on the left show the abundance distributions of different simulated communities. The middle panel shows the estimation results of distinct k-mer count.
Each bar represents an estimation result based on a synthetic metagenomic sample, and the error bar shows the 95% bootstrap confidence interval of the estimate. The black dashed line is the true distinct k-mer count. The right panel shows how the relative error changes as the initial coverage increases (k = 20). (b) The same as (a) except that the species number is 50. (Note that some of the samples with 10 species are not shown in the barplot; see Additional file 1: Figure S1 for all samples with 10 species) Effects of k and sequencing errors To see how the parameter k affects the results, we chose different values of k for the estimation on a simulated metagenomic sample (50 species, high complexity, 25 million reads). Results show that the estimation is robust to the selection of k (Fig. 3c). a Performance on metagenomic data with sequencing errors. b True and estimated k-mer redundancy index (KRI) in different metagenomic communities. About 60% of the species are randomly chosen as the known species to estimate the KRI of all species. c Results for different selections of k. A simulated metagenomic sample with 50 species and high complexity of the abundance distribution was used. d Results on HMP Tongue Dorsum datasets Despite the good performance on error-free sequencing data, the Golub-Welsch algorithm can give bad predictions when the sequencing data contain errors (Fig. 3a). Sequencing errors introduce novel k-mers that should not exist in the data. A higher fraction of low-count k-mers is interpreted by the algorithm as evidence of more low-abundance microbes. Therefore, sequencing errors lead to an exaggerated estimate of total distinct k-mers, and this exaggeration grows as the sequencing depth increases (Fig. 3a, green bars). To solve this problem, we use Pollux [21] to correct the sequencing errors before counting k-mers. Results on simulation data show that the performance is brought under control after correcting the sequencing errors (Fig. 3a, blue bars).
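This inflation effect is easy to reproduce in a few lines. The sketch below (read length, error rate, and genome size are arbitrary choices, and the substitution model is deliberately crude) injects random errors into error-free reads and watches the distinct k-mer count grow:

```python
import random

random.seed(0)  # deterministic toy example

def distinct_kmers(reads, k=20):
    """DKC of a read set: the number of distinct k-mers across all reads."""
    return len({r[i:i + k] for r in reads for i in range(len(r) - k + 1)})

def add_errors(read, rate):
    """Substitute each base with probability `rate` (crude: may resample the same base)."""
    return "".join(random.choice("ACGT") if random.random() < rate else b for b in read)

# A tiny random "genome" sampled redundantly by 100-bp reads (~25x coverage).
genome = "".join(random.choice("ACGT") for _ in range(2000))
starts = [random.randrange(len(genome) - 100) for _ in range(500)]
clean_reads = [genome[s:s + 100] for s in starts]
noisy_reads = [add_errors(r, rate=0.01) for r in clean_reads]

clean_dkc = distinct_kmers(clean_reads)
noisy_dkc = distinct_kmers(noisy_reads)  # inflated by spurious error k-mers
```

Every substitution corrupts up to k overlapping k-mers into sequences that almost never occur elsewhere, so the noisy DKC exceeds the clean one, while the clean DKC can never exceed the genome's own L − k + 1 distinct k-mers.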
Comparison between different methods Besides the Golub-Welsch algorithm, we also applied the main algorithm of preseq, rational function approximation (RFA), to the simulated metagenomic samples with 50 species (Additional file 1: Figure S2) and compared its performance with that of the Golub-Welsch algorithm. Both methods achieve good performance and each has its own strengths (Additional file 1: Figure S3). RFA outperforms the Golub-Welsch algorithm on the medium-complexity communities (two species with a total relative abundance higher than 70%), indicating a stronger ability of extrapolation. For communities with high or low complexity, the Golub-Welsch algorithm produces stable and accurate results with only a few exceptions. RFA also gives good results, but with a slight tendency to exaggerate the estimate. Estimating KRI using known species There is a gap between the distinct k-mer count (DKC) and the total genome length or TKC. We use the KRI to bridge this gap, as introduced above. For the simulated metagenomic samples, GOTTCHA successfully identified most species and therefore led to a perfect estimation of the KRI. We did some simple simulations to show that the KRI of a genome set can be estimated using only part of its genomes. In general, the KRI of a community increases as there are more species in the community, as shown in Fig. 3b. The result shows that the KRI of a community can be well estimated using only part of the species, which demonstrates the feasibility of estimating the KRI of a community based only on known species. Results on real metagenomic datasets We applied our method to the two selected datasets (Figs. 3d and 4). One general observation is that the number of uncaptured k-mers can differ a lot between samples, even when the observed k-mer counts are similar (Figs. 3d and 4a). Further comparison between normal samples and T2D samples shows that the predicted distinct k-mer counts present a significant difference while the observed k-mer counts do not (Fig. 4c and d).
In the original study, it was reported that the difference in within-sample diversity (entropy of gene abundance) between the normal group and the T2D group is not significant [27]. Since the gene abundances were calculated based only on extracted sequence data, the significance may have been masked by ignoring the difference in the 'unseen' information. Results on T2D metagenomic datasets. a Observed and estimated k-mer count. b Histogram and density of the observed distinct k-mer count. c Histogram and density of the predicted distinct k-mer count In this paper, we proposed the question of 'how long the total genome length of all different species in a microbial community is' and introduced a method to answer it. This is an important step toward the estimation of unknown and unseen component genomes in a microbiome. We devised a k-mer-based strategy to remove the reliance on the limited microbial reference genomes, so that unknown species can be included in the estimation. To explore the information that has not been directly captured in the metagenomic sample, we developed a statistical method to estimate the number of uncaptured k-mers. The distinct k-mer count was multiplied by the k-mer redundancy index (KRI), an index defined to reflect the repetition of k-mers and estimated from known species, to get the total genome length. Performance on the simulation data shows that the proposed method works well, and that the precision of the estimate is mainly affected by factors including sequencing errors, the initial coverage of the community and the complexity of the microbial diversity. Extracting information from metagenomic data is the foundation of downstream analysis. The complex nature of microbial communities and the inadequate microbial diversity represented in existing databases make it challenging to extract the full information.
A metagenomic sample captures only part of the information about the microbial community because of the community's complexity, and only part of what is captured can be extracted because of the limited known references. Ignoring this 'uncaptured' and 'unknown' information can mislead downstream analyses. In this work of estimating the total genome length, we adopted a reference-free strategy to include the 'unknown' information, and a statistical model was employed to estimate the 'uncaptured' part, so that the extracted information is as complete as possible. The experiments on simulated data showed the feasibility of the proposed method, and the results on real datasets revealed that downstream analyses may be biased if 'unseen' information is ignored. Further studies are needed to explore how the estimated total metagenome length can help to better extract information about unknown or uncaptured species from metagenomic data and to compare metagenomic samples. DKC: Distinct k-mer count KRI: K-mer redundancy index TKC: Total k-mer count Gordon JI. Honor thy gut symbionts redux. Science. 2012; 336(6086):1251–3. Falony G, Wijmenga C, Raes J, et al. Population-level analysis of gut microbiome variation. Science. 2016; 352(6285):560–4. Zhernakova A, Wijmenga C, Fu J, et al. Population-based metagenomics analysis reveals markers for gut microbiome composition and diversity. Science. 2016; 352(6285):565–9. Cui H, Li Y, Zhang X. An overview of major metagenomic studies on human microbiomes in health and disease. Quant Biol. 2016; 4(3):192–206. Zhang X, Liu S, Cui H, Chen T. Reading the underlying information from massive metagenomic sequencing data. Proc IEEE. 2017; 105(3):459–73. Rodriguez RL, Konstantinidis KT. Estimating coverage in metagenomic data sets and why it matters. ISME J. 2014; 8(11):2349–51. Lander ES, Waterman MS. Genomic mapping by fingerprinting random clones: a mathematical analysis. Genomics. 1988; 2(3):231–9.
Hooper SD, Dalevi D, Pati A, Mavromatis K, Ivanova NN, Kyrpides NC. Estimating dna coverage and abundance in metagenomes using a gamma approximation. Bioinformatics. 2010; 26(3):295–301. Daley T, Smith AD. Predicting the molecular complexity of sequencing libraries. Nat Methods. 2013; 10(4):325–7. Rodriguez RL, Konstantinidis KT. Nonpareil: a redundancy-based approach to assess the level of coverage in metagenomic datasets. Bioinformatics. 2014; 30(5):629–35. Tamames J, de la Pena S, de Lorenzo V. Cover: a priori estimation of coverage for metagenomic sequencing. Environ Microbiol Rep. 2012; 4(3):335–41. Wendl MC, Kota K, Weinstock GM, Mitreva M. Coverage theories for metagenomic dna sequencing based on a generalization of stevens' theorem. J Math Biol. 2013; 67(5):1141–61. Segata N, Waldron L, Ballarini A, Narasimhan V, Jousson O, Huttenhower C. Metagenomic microbial community profiling using unique clade-specific marker genes. Nat Methods. 2012; 9(8):811–4. Oh J, Byrd AL, Deming C, Conlan S, Program NCS, Kong HH, Segre JA. Biogeography and individuality shape function in the human skin metagenome. Nature. 2014; 514(7520):59–64. Bankevich A, Pevzner PA. Joint analysis of long and short reads enables accurate estimates of microbiome complexity. Cell Syst. 2018; 7(2):192–200. Barbour AD, Chen LHY, Loh WL. Compound poisson approximation for nonnegative random-variables via stein method. Ann Probab. 1992; 20(4):1843–66. Daley T, Smith AD. Modeling genome coverage in single-cell sequencing. Bioinformatics. 2014; 30(22):3159–65. Golub GH, Welsch JH. Calculation of gauss quadrature rules. Math Comput. 1969; 23(106):221–30. Truong DT, Franzosa EA, Tickle TL, Scholz M, Weingart G, Pasolli E, Tett A, Huttenhower C, Segata N. Metaphlan2 for enhanced metagenomic taxonomic profiling. Nat Methods. 2015; 12(10):902–3. Freitas TAK, Li P-E, Scholz MB, Chain PS. Accurate read-based metagenome characterization using a hierarchical suite of unique signatures. Nucleic Acids Res. 
2015; 43(10):69. Marinier E, Brown DG, McConkey BJ. Pollux: platform independent error correction of single and mixed genomes. BMC Bioinformatics. 2015; 16:10. Marcais G, Kingsford C. A fast, lock-free approach for efficient parallel counting of occurrences of k-mers. Bioinformatics. 2011; 27(6):764–70. Pruitt KD, Tatusova T, Maglott DR. Ncbi reference sequences (refseq): a curated non-redundant sequence database of genomes, transcripts and proteins. Nucleic Acids Res. 2007; 35(Database issue):61–5. Mavromatis K, Hugenholtz P, Kyrpides NC, et al. Use of simulated data sets to evaluate the fidelity of metagenomic processing methods. Nat Methods. 2007; 4(6):495–500. Liu S, Hua K, Chen S, Zhang X. Comprehensive simulation of metagenomic sequencing data with non-uniform sampling distribution. Quant Biol. 2018; 6(2):175–85. Turnbaugh PJ, Ley RE, Hamady M, Fraser-Liggett CM, Knight R, Gordon JI. The human microbiome project. Nature. 2007; 449(7164):804–10. Qin J, Kristiansen K, Wang J, et al. A metagenome-wide association study of gut microbiota in type 2 diabetes. Nature. 2012; 490(7418):55–60. The publication of this work was sponsored by the National Natural Science Foundation of China (61673231 and 61721003). K-mer count tables for all simulated datasets and real datasets can be found at https://github.com/stevenhuakui/Total-genome-length-data. About this supplement This article has been published as part of BMC Genomics Volume 20 Supplement 2, 2019: Selected articles from the 17th Asia Pacific Bioinformatics Conference (APBC 2019): genomics. 
The full contents of the supplement are available online at https://bmcgenomics.biomedcentral.com/articles/supplements/volume-20-supplement-2. MOE Key Laboratory of Bioinformatics Division and Center for Synthetic & System Biology, BNRIST, Beijing, 100084, China Kui Hua & Xuegong Zhang Department of Automation, Tsinghua University, Beijing, 100084, China School of Life Sciences, Tsinghua University, Beijing, 100084, China Xuegong Zhang Kui Hua KH conceived the study, developed methodology, performed data analysis and wrote the manuscript. XZ conceived the study and wrote the manuscript. Both authors have read and approved the final manuscript. Correspondence to Xuegong Zhang. This file contains Figure S1 – Figure S3. (PDF 6194 kb) Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Hua, K., Zhang, X. Estimating the total genome length of a metagenomic sample using k-mers. BMC Genomics 20, 183 (2019). https://doi.org/10.1186/s12864-019-5467-x Keywords: Sequencing coverage; Distinct k-mers; Genome length
EXTRA EXTRA (positrons)! Read all about it! Posted by David Zaslavsky on April 9, 2013 12:54 AM. Tags: positrons. Last week, I wrote about the announcement of the first results from the Alpha Magnetic Spectrometer: a measurement of the positron fraction in cosmic rays. Although AMS-02 wasn't the first to make this measurement, it was nevertheless a fairly exciting announcement, because it confirms a drastic deviation from the theoretical prediction based on known astrophysical sources. Unfortunately, most of what you can read about it is pretty light on details. News articles and blog posts alike tend to go (1) Here's what AMS measured, (2) DARK MATTER!!!1!1!! All the attention has been focused on the experimental results and the vague possibility that it could have come from dark matter, but there's precious little real discussion of the underlying theories. What's a poor theoretical physics enthusiast to do? Well, we're in luck, because on Friday I attended a very detailed presentation on the AMS results by Stephane Coutu, author of the APS Viewpoint about the announcement. He was kind enough to point me to some references on the topic, and even to share his plots comparing the theoretical models to AMS (and other) data, several of which appear below. I never would have been able to put this together without his help, so thanks Stephane! Time to talk positrons. The Cosmic Background When people talk about "known astrophysical sources" of positrons, they're mostly talking about cosmic rays. Not primary cosmic rays, though, which are the particles that come directly from pulsars, accretion discs, or whatever other sources are out there. Primary cosmic rays are generally protons or atomic nuclei.
As they travel through space, they decay into other particles, secondary cosmic rays, through processes like this: $$\begin{align}p + \text{particle} &\to \pi^\pm + X \\ \pi^\pm &\to \mu^\pm\nu_\mu \\ \mu^\pm &\to e^\pm\nu_e\bar\nu_\mu\end{align}$$ Positrons in the energy range AMS can detect, below \(\SI{1}{TeV}\) or so, mostly come from galactic primary cosmic rays (protons). We can determine the production spectrum of these cosmic ray protons (how quickly they are produced at various energies) using astronomical measurements like the ratio of boron to carbon nuclei and the detected flux of electrons — but that's a whole other project that I won't get into here. Once the proton spectrum is set, we can combine it with the density of the interstellar medium to determine how often reactions like the one above will occur, again as a function of energy. That gives us a spectrum for positron production. But to actually match this model to what we detect in Earth orbit, we need to account for various energy loss mechanisms that affect cosmic rays as they travel. Both primary (protons) and secondary (positrons) cosmic rays lose energy to processes like synchrotron radiation (energy losses as charged particles change direction in a magnetic field), bremsstrahlung (energy losses from charged particles slowing down in other particles' electric fields), and inverse Compton scattering (charged particles "bouncing" off photons). These dissipative mechanisms tend to reduce the positron spectrum at high energies. Doing all this accurately involves accounting for the distribution of matter in the galactic disk, and accordingly it takes a rather sophisticated computer program to get it right.
The "industry standard" is a program called GALPROP, which breaks down the galaxy and its halo (a slightly larger region surrounding the disk, which contains globular clusters and dark matter) into small regions, tracks the spectra of various kinds of particles in each region, and models how the spectra change over time as cosmic rays move from one region to another. There are various models with different levels of detail, most of which are described in this paper and improved in e.g. this one and this one: The class of theories known as leaky box models (or homogeneous models) assumes that cosmic rays are partially confined within the galaxy — a few leak out into intergalactic space, but mostly they stay within the galactic disk and halo. Both the distribution of where secondary cosmic rays are produced and the interstellar medium they travel through are effectively uniform. Accordingly, the times (or distances) they travel before running into something follow an exponential distribution with an energy-dependent average value \(\langle t\rangle\) (or \(\lambda_e = \rho v\langle t\rangle\)). The diffusive halo model assumes that the galaxy consists of two regions, a disk and a halo. Within these two regions, cosmic rays diffuse outward from their sources, and those that reach the edge of the halo escape from the galaxy, never to return. The diffusion coefficient is taken to be twice as large in the disk as in the halo due to the increased density of matter. The dynamical halo model is exactly like the diffusive halo model with the addition of a "galactic wind" that pushes all cosmic rays in the halo outward at some fixed velocity \(V\). There are others, less commonly used, but all these models have one significant thing in common: they give a positron fraction that decreases with increasing energy. And the first really precise measurements of cosmic ray positrons, performed by the HEAT and CAPRICE experiments, confirmed that conclusion, as shown in this plot.
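A quick sanity check of the leaky-box idea (my own toy with made-up numbers, nothing like a real GALPROP run): if particles are produced at a constant rate \(Q\) and escape on a mean timescale \(\tau = \langle t\rangle\), the density obeys \(dN/dt = Q - N/\tau\) and settles to the equilibrium value \(Q\tau\).

```python
# Toy leaky-box balance: dN/dt = Q - N/tau, so N approaches Q * tau.
Q, tau, dt = 100.0, 5.0, 0.001   # arbitrary units
N = 0.0
for _ in range(200_000):          # integrate out to t = 200, i.e. 40 escape times
    N += (Q - N / tau) * dt
print(round(N))  # ≈ Q * tau = 500
```

Since \(\langle t\rangle\) decreases with energy, the equilibrium abundance of secondaries falls with energy too, which is one ingredient in the falling positron fraction these models predict.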
But new data from PAMELA, Fermi-LAT, and now AMS-02 show something entirely different! Above \(10\ \text{GeV}\), the positron fraction actually increases with energy, showing that something must be producing additional positrons at those higher energies. The spectrum of the positron fraction excess, i.e. the difference between secondary emission predictions and the data, suggests that this unknown source produces roughly equal numbers of positrons and electrons at the energies AMS has been able to measure, with a power-law spectrum for each: $$\phi_{\mathrm{e}^\pm} \propto E^{-\gamma_s},\quad E \lesssim 300\ \text{GeV}$$ As an example model, the AMS-02 paper postulated $$\begin{align}\Phi_{\mathrm{e}^+} &= C_{\mathrm{e}^+}E^{-\gamma_{\mathrm{e}^+}} + C_s E^{-\gamma_s} e^{-E/E_s} \\ \Phi_{\mathrm{e}^-} &= C_{\mathrm{e}^-}E^{-\gamma_{\mathrm{e}^-}} + C_s E^{-\gamma_s} e^{-E/E_s}\end{align}$$ with \(E_s = 760\ \text{GeV}\) based on a fit to their data. But regardless of whether this specific formula works, the point is that secondary emission tends to produce more positrons than electrons (because most primary cosmic rays are protons, whose collisions preferentially produce positively charged pions, and thus positrons, by charge conservation). That doesn't fit the profile. This unexplained excess is probably something else. Neutralinos Naturally, physicists are going to be most excited if the positron excess turns out to come from some previously unknown particle. The most likely candidate is the neutralino, denoted \(\tilde{\chi}^0\), a type of particle predicted by most supersymmetric theories. Neutralinos are mixtures of the superpartners of the neutral electroweak gauge bosons and of the Higgs boson(s). According to the theories, reactions involving supersymmetric particles tend to produce other supersymmetric particles. The neutralino, as the lightest of these particles, is at the end of the supersymmetric decay chain, which makes it a good candidate to constitute the mysterious dark matter.
But occasionally, neutralinos will annihilate to produce normal particles like positrons and electrons. If dark matter is actually made of large clouds of neutralinos, it's natural to wonder whether the positrons produced from their annihilation could make up the difference between the prediction from secondary cosmic rays and the AMS observations. Here's how the calculation goes. Using the mass of dark matter we know to exist from galaxy rotation curves and gravitational lensing, and assuming some particular mass \(m_{\chi}\) for the neutralino, we can calculate how many neutralinos are in our galaxy's dark matter halo. Multiplying that by the annihilation rate predicted by the supersymmetric theory gives the rate of positron production from neutralino annihilation. That rate gets plugged into cosmic ray propagation models like those described in the last section, leading to predictions for the positron flux measured on Earth. Several teams have run through the calculations and found that… well, it kind of works, but only if you fudge the numbers a bit. Neutralino annihilation predicts a roughly power-law contribution to the positron fraction up to the mass of the neutralino; that is, $$\phi_{\tilde{\chi}^0\to \mathrm{e}^{\pm}} \sim \begin{cases}C E^{\gamma_\chi},& E \lesssim m_\chi c^2 \\ \text{small},& E \gtrsim m_\chi c^2\end{cases}$$ As long as \(m_\chi \gtrsim 500\ \text{GeV}\) or so, this is exactly the kind of spectrum needed to explain the discrepancy between the PAMELA/Fermi/AMS results and the secondary emission spectrum. The problem lies in the overall constant \(C\), which you would calculate from the dark matter density and the theoretical annihilation rate. It's orders of magnitude too small. So the papers multiply this by an additional "boost" factor, \(B\), and examine how large \(B\) needs to be to match the experimental results.
Depending on the model, \(B\) ranges from about 30 (Baltz et al., \(m_\chi = 160\ \text{GeV}\)) to over 7000 (Cholis et al., \(m_\chi = 4000\ \text{GeV}\)). Alternatively, you can assume that something is wrong with the propagation models, and that positrons lose more energy than expected on their way through the interstellar medium. This is the approach taken in this paper, which finds that increasing the energy loss rate by a factor of 5 can kind of match the positron fraction data. But that much of an adjustment to the energy loss leads to conflicts with other measurements. It winds up being an even more unrealistic model. Even if the parameters of some supersymmetric theory can be tweaked to match the data without a boost factor, there's one more problem: neutralino annihilation produces antiprotons and photons too. If the positron excess is caused by neutralino annihilation, there should be corresponding excesses of antiprotons and gamma rays, but we don't see those. It's going to be quite tricky to tune a dark matter model so that it gives us the needed flux of positrons without overshooting the measurements of other particles. There is only a small range of values of mass and interaction strength that would be consistent with all the measurements. So as much as dark matter looks like an interesting direction for future research, it's not a realistic model for the positron excess just yet. Astrophysical sources With the dark matter explanation looking only moderately plausible at best, let's turn to other (less exotic) astrophysical sources. There's a fair amount of uncertainty about just how many cosmic rays are produced even by known sources. They could be emitting enough electrons and positrons to make the difference between the new data and the theories. Pulsars in particular, in addition to being sources of primary cosmic rays (protons), are often surrounded by nebulae that emit electrons and positrons from their outer regions.
The pulsar's wind interacts with the nebula to accelerate light particles to high energies, giving these systems the name of pulsar wind nebulae (PWNs). Simply by virtue of being a PWN, such a system is expected to emit a certain "baseline" positron and electron flux, which is included in secondary emission models, but the pulsar could have been much more active in the past, emitting a lot more positrons and electrons. These would have become "trapped" in the surrounding nebula and continued to leak out over time, which means we would be seeing more positrons and electrons than we'd expect to based on the pulsar's current activity. There are a few nearby PWNs which seem like excellent candidates for this effect, going by the (rather snazzy, if you ask me) names of Geminga and Monogem. A number of papers (Yüksel et al., and recently Linden and Profumo) have crunched the numbers on these pulsars, and they find that the positron/electron flux from enhanced pulsar activity can match up quite well with the positron fraction excess detected by PAMELA, Fermi-LAT, and AMS-02. The "smoking gun" that would definitely (well, almost definitely) identify a pulsar as the source of the excess would be an anisotropy in the flux: we'd see more positrons coming from the direction of the pulsar than from other directions in the sky. Now, AMS-02 (and Fermi-LAT before it) looked for an effect of this sort, and they didn't find it — but according to Linden and Profumo, it's entirely possible that the anisotropy could be very slight, less than what either experiment was able to detect. We'll have to wait for future experimental results to check that hypothesis. Modified secondary emission Of course, it's important to remember (again) that all these analyses are based on the propagation models that tell us how cosmic rays are produced and move through the galaxy.
It's entirely possible that adjusting the propagation models alone, without involving any extra source of positrons, would bring the predictions from secondary emission in line with the experimental data. A paper by Burch and Cowsik looked at this possibility, and it turns out that something called the nested leaky-box model can fix the positron fraction discrepancy fairly well. As I wrote back in the first section, the leaky box model gets its name because cosmic rays are considered to be partially confined within the galaxy. Well, the nested leaky box model adds the assumption that cosmic rays are also partially confined in small regions around the sources that produce them. That means that, rather than being produced uniformly throughout the galaxy, secondary cosmic rays come preferentially from certain regions of space. This is actually similar to the hypothesis from the last section, of extra positrons coming from PWNs, so it shouldn't be too surprising that using the nested leaky box model can account for the data about as well as the pulsars can. All the media outlets reporting on the AMS results have been talking about the dark matter hypothesis, even going so far as to say AMS found evidence of dark matter — but clearly, that's not the case. There's no reason to say we have evidence of dark matter when there are perfectly valid, simpler, maybe even better explanations for the positron fraction excess at high energies! There's just not enough data yet to tell which explanation is right. As AMS-02 continues to make measurements over the next decade or so, there are two main things to look for that will help distinguish between these models. First, does the positron fraction stop rising? And if so, where on the energy spectrum does it peak? As we've seen, this can happen in any model, but if neutralino annihilation is the right explanation, that peak will have to occur at an energy compatible with other constraints on the neutralino mass. 
Perhaps more importantly, is there any anisotropy in the direction from which these positrons are coming? If there is, it would pretty strongly disfavor the dark matter hypothesis. The anisotropy itself could actually point us toward the source of the extra positrons. So even if we don't wind up discovering a new particle from this series of experiments, there's probably something pretty interesting to be found.
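As a closing numerical footnote (my own sketch, not from the post): the minimal-model formulas quoted earlier, two diffuse power laws plus a common source term with a cutoff, reproduce the qualitative shape of the data, with a positron fraction that first falls and then rises. The parameter values below are purely illustrative, loosely inspired by the AMS fit rather than the actual fitted numbers.

```python
import math

# Minimal model: diffuse power laws plus a common source term for e+ and e-.
# All constants are illustrative choices, not the AMS best-fit values.
C_pos, g_pos = 0.10, 3.6            # diffuse positrons (smaller, steeper)
C_ele, g_ele = 1.00, 3.0            # diffuse electrons
C_s, g_s, E_s = 0.006, 2.4, 760.0   # common source term with cutoff (GeV)

def flux_pos(E):
    return C_pos * E ** -g_pos + C_s * E ** -g_s * math.exp(-E / E_s)

def flux_ele(E):
    return C_ele * E ** -g_ele + C_s * E ** -g_s * math.exp(-E / E_s)

def fraction(E):
    return flux_pos(E) / (flux_pos(E) + flux_ele(E))

# The fraction dips at intermediate energies, then rises again at high energy
for E in (1, 10, 100):
    print(E, round(fraction(E), 3))
```

The rise happens because the common source term, which feeds positrons and electrons equally, falls more slowly with energy than the diffuse positron power law, so it eventually dominates the positron flux.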
Why was quantum mechanics regarded as a non-deterministic theory? There seems to be a widespread impression that quantum mechanics is not deterministic, e.g. the world is quantum-mechanical and not deterministic. I have a basic question about quantum mechanics itself. A quantum-mechanical object is completely characterized by the state vector. The time evolution of the state vector is perfectly deterministic. The system, equipment, environment, and observer are all part of the state vector of the universe. Measurements with different results are parts of the state vector at different spacetime points. The measurement is a complicated process between system and equipment. The equipment has $10^{23}$ degrees of freedom, whose states we neither know nor are able to compute. In this sense, the situation of QM is quite similar to statistical physics. Why can't the situation be just like statistical physics, where we introduce an assumption to simplify the calculation, namely that every accessible microscopic state has equal probability? In QM, we also introduce an assumption about the probabilistic measurement to produce the measurement outcome. PS1: If we regard non-determinism as an intrinsic feature of quantum mechanics, then the measurement has to disobey the Schrödinger picture. PS2: The boldface argument above is not constrained by Bell's inequality. In the local hidden variable theory from Sakurai's modern quantum mechanics, a particle with $z+$, $x-$ spin measurement results corresponds to a $(\hat{z}+,\hat{x}-)$ "state". If I just say the time evolution of the universe is $$\hat{U}(t,t_0) \lvert \mathrm{universe} (t_0) \rangle = \lvert \mathrm{universe} (t) \rangle,$$ then when the $z+$ was obtained, the state of the universe is $\lvert\mathrm{rest} \rangle \lvert z+ \rangle $. Later, when the $x-$ was obtained, the state of the universe is $\lvert\mathrm{rest}' \rangle \lvert x- \rangle $. It is deterministic, and does not require the hidden-variable setup as in Sakurai's book.
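The deterministic half of this picture is easy to illustrate numerically. A minimal sketch (my own toy model, assuming $\hbar = 1$ and an arbitrary two-level Hamiltonian $H = \sigma_x$, so that $U(t) = \cos(t)\,I - i\sin(t)\,\sigma_x$): the evolved state is a fixed function of time, while probabilities only appear when the Born rule is applied to the evolved amplitudes.

```python
import numpy as np

# Two-level toy: H = sigma_x with hbar = 1, so U(t) = cos(t) I - i sin(t) sigma_x.
# (Arbitrary illustrative choice of Hamiltonian.)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def U(t):
    return np.cos(t) * I2 - 1j * np.sin(t) * sx

psi0 = np.array([1, 0], dtype=complex)  # start in |0>

# Deterministic part: the same initial state always evolves to the same final state
psi_a = U(0.7) @ psi0
psi_b = U(0.7) @ psi0
print(np.allclose(psi_a, psi_b))  # True

# Probabilistic part: the Born rule turns amplitudes into outcome probabilities
probs = np.abs(psi_a) ** 2
print(round(probs.sum(), 12))  # 1.0 (unitarity preserves total probability)
```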
PS3: My question is just about quantum mechanics itself. It is entirely possible that the final theory of nature will require drastic modification of QM. Nevertheless, that is outside the scope of the current question. PS4: One might say the state vector is probabilistic. However, the result of a measurement happens in the equipment, which is part of the total state vector. Giving a probabilistic interpretation to a deterministic theory is logically inconsistent. quantum-mechanics probability determinism bells-inequality $\begingroup$ Quantum mechanics is deterministic, but it is also probabilistic -- i.e. you can deterministically calculate the probability of a random event happening. This is to distinguish it from non-deterministic (i.e. stochastic) systems where you do not generally have "one" solution but an entire family of solutions depending on random variables. $\endgroup$ – webb May 2 '14 at 22:33 $\begingroup$ If I know the wavefunction, or state vector, more generally, of the universe, then I don't need the probability anymore $\endgroup$ – user26143 May 3 '14 at 7:34 $\begingroup$ If you know the state vector of the universe, then this still doesn't give you information about the exact outcome of any quantum experiment — only probabilities. $\endgroup$ – Ruslan May 3 '14 at 7:56 $\begingroup$ If the equipment and system are governed by the Schrodinger picture, there is no (strict, meaning not in the sense that occurs in statistical mechanics) probability. If there is (strict) probability, then the Schrodinger picture is incomplete. $\endgroup$ – user26143 May 3 '14 at 8:22 $\begingroup$ It is not clear what you are asking. Quantum theory is non-deterministic in the sense that it works with objects ($\psi$ functions, kets) that can be used to calculate probabilities, not the actual results. It is the same as in statistical physics, only probabilistic statements can be derived.
$\endgroup$ – Ján Lalinský May 3 '14 at 9:53 I agree with much of what you write in your question. Whether quantum mechanics is considered to be deterministic is a matter of interpretation, summarised in this wiki comparison of interpretations. The wiki definition of determinism in this context, which I think is entirely satisfactory, is Determinism is a property characterizing state changes due to the passage of time, namely that the state at a future instant is a function of the state in the present (see time evolution). It may not always be clear whether a particular interpretation is deterministic or not, as there may not be a clear choice of a time parameter. Moreover, a given theory may have two interpretations, one of which is deterministic and the other not. In, for example, the many-worlds interpretation, time evolution is unitary and is governed entirely by Schrödinger's equation. There is nothing like the "collapse of the wave-function" or a Born rule for probabilities. In other interpretations, for example, Copenhagen, there is a Born rule, which introduces a non-deterministic collapse along with the deterministic evolution of the wave-function by Schrödinger's equation. In your linked text, the author writes that quantum mechanics is non-deterministic. I assume the author rejects the many-worlds and other deterministic interpretations of quantum mechanics. Aspects of such interpretations remain somewhat unsatisfactory; for example, it is difficult to calculate probabilities correctly without the Born rule. innisfree $\begingroup$ The problem is that in the many-worlds interpretation there is no deterministic connection between the state and the observed behavior, since theoretically all branches co-exist, but in practice only one is observed. "Determinism" is a linguistic sleight of hand. Bohmian mechanics is indeed deterministic, but it involves faster-than-light signals and ephemeral entities (Bohmian particles) unobservable in principle, like ether.
For that matter Everett's branches are much like ether as well, and play the same role as Bohmian particles. $\endgroup$ – Conifold Apr 28 '16 at 1:46 Quantum mechanics is non-deterministic in actual measurements, even in a gedanken experiment, because of the Heisenberg Uncertainty Principle, which in the operator representation appears as non-commuting operators. It is a fundamental relation of quantum mechanics: if you measure the position accurately, the momentum is completely undefined. The interpretation of the solutions of Schrodinger's equation as predicting the behavior of matter depends on the postulates: the state function determined by the equation is a probability distribution for finding the system under observation with given energy and coordinates. This does not change if large ensembles are considered, except computationally. The probabilistic nature will always be there as long as the theory is the same. anna v $\begingroup$ You are wrong. The HUP is not optional. The total universe obeys the HUP postulate, so as far as the theory of quantum mechanics goes, which is what you are asking about, it will always be indeterminate by construction of the theory. It was constructed to fit observations, and if you extrapolate to the total universe it makes no difference. (You said you are not considering other theories.) $\endgroup$ – anna v May 3 '14 at 12:02 $\begingroup$ When measuring one particle's x then going to the next, their momentum will be indeterminate and "next" will have a whole phase space to be chosen from, because momentum determines the next probability of x, not a point but a probability of being found at that point, whether for 1, 2, 3 or an infinite number of particles. $\endgroup$ – anna v May 3 '14 at 12:48 $\begingroup$ The HUP is a postulate incorporated into the mathematics of commutators.
$\endgroup$ – anna v May 3 '14 at 12:50 $\begingroup$ No, the Schrodinger picture gives a probability of finding any measurement value, not a fixed value of the momentum. One has to operate on the Schrodinger state function with the momentum operator to get the momentum, and the operation/measurement will give a value within the probability envelope. $\endgroup$ – anna v May 3 '14 at 13:05 $\begingroup$ The HUP isn't critical to determinism; the key point is the Born rule/wave function collapse. I think this answer is off target. $\endgroup$ – innisfree May 3 '14 at 19:11 The difference between statistical physics and quantum mechanics is that, in statistical physics, it is always reasonable to either measure a quantity, or demonstrate that the effect of that quantity can be bundled into an easy-to-work-with random variable, often through the use of the Central Limit Theorem. In such situations, it can be shown that the answer will be a deterministic answer plus a small perturbation from the random variables with a 0 expectation and a very small variance. In quantum mechanics, the interesting properties show up in situations where it's not possible to measure a quantity and not plausible to bundle it up into a random variable using the central limit theorem. Sometimes you can, of course: in particular, this approach works well in modeling a quantum-mechanical system which is already well modeled in classical physics. For the most part, we don't observe many quantum effects in day-to-day life! However, quantum mechanics is focused on the more interesting regions where those unmeasurable quantities have an important impact on the outcome of the system. As an example, in many entanglement scenarios, you can get away with ignoring the correlation between the states of the particles.
This is good, because in theory there's some small level of entanglement between all particles that have interacted, and it's good to know that we can often get away with ignoring this, and treating the values as simple independent and identically distributed variables. However, in the entanglement cases quantum mechanics is interested in, we intentionally explore situations where the entanglement is strong enough that the correlation can't just be handwaved away and still yield experimentally validated results. We are obliged to carry it through our equations if we want to provide a good model of reality. There are many ways to do this, and one of the dividing lines regarding the topic is the line drawn between the different interpretations of QM. Some of them hold to a deterministic model, others hold to non-deterministic arguments (the Copenhagen interpretation being an example). In general, the models which are deterministic have to give up something else which is valued by physicists. The many-worlds theory gets away with being deterministic by arguing that every possible outcome of every classical observation occurs, in its own universe. This is consistent with the equations that we believe are a good model of quantum mechanics, but comes with strange side effects when applied to the larger world (quantum suicide, for instance). The Copenhagen interpretation is, in my opinion, the most natural interpretation in that it dovetails with the way we do classical physics smoothly, without any pesky alternate realities. I have found that mere mortals are most comfortable with the intuitive leaps of the Copenhagen interpretation, as compared to the intuitive leaps of other interpretations. However, the Copenhagen interpretation is decidedly non-deterministic.
Because this one seems easier to explain to many people, it has achieved a great deal of notoriety, so its non-determinism gets applied to all of quantum mechanics via social mechanisms (which are far more complicated than any quantum mechanisms!) So you can pick any interpretation you please. If you like determinism, there are plenty of options. However, one cannot use many of the basic tools of statistical mechanics to handle quantum scenarios, because the basic physics of quantum mechanics leads to situations where the basic assumptions of statistical mechanics become untenable. Your example of the result of the measurement happening in the equipment is an excellent example. As in statistical physics, the state of the measurement equipment can be modeled as a state vector, and it turns out that it's a very reasonable assumption to assume that it is randomly distributed. However, equipment designed to measure quantum effects is expressly designed to strongly correlate with the state of the particle under observation before measurement began. When the measurement is complete, the distribution of the state of the measurement equipment is decidedly poorly modeled as a state plus a perturbation with a small variance. The distribution is, instead, a very multimodal distribution, because it was correlated to the state of the particle, and most of the interesting measurements we want to take are those of a particle whose [unmeasured] state is well described by a multimodal distribution. If you learn Quantum Mechanics you will see that the observables of any quantum system depend on the state of the system (final, initial, ground state or excited state). In theory, there are a number of interpretations of Quantum Mechanics wiki, link. The mathematical formulation of quantum mechanics is built on the notion of operators. When you do a measurement you perturb the system state by applying an operator on it.
The eigenvalue of the operator corresponds to the measured value of the system observable. However, each eigenvalue has a certain probability, and therefore by measuring (applying) an operator on the system state there will be a finite (or infinite) number of final states, each of them with a given probability. This is the essence of non-determinism in quantum mechanics. The next question arises: how does non-determinism apply to the large-scale universe, and what is the "reach" of non-deterministic phenomena in the universe? In classical theories (like general relativity and electromagnetism), you have, for example, the Einstein equations, which govern the dynamics and are fully deterministic. Mikey Mike Forget interpretations. The predictions of quantum mechanics, which agree with all interpretations (by definition of 'interpretation'), do not allow prediction of experimental/observational outcomes no matter how much information is gathered about initial conditions. (You can't even get the classical information needed in classical physics because of the uncertainty principle.) None of the interpretations challenge this, not even in principle. According to the math, which is wildly successful in its predictions, a given present does not determine the future. That's why quantum mechanics is said to be indeterministic, not because of any interpretation. It doesn't matter if you believe in wave function collapse or not, or other worlds or not, or whatever. Saying the theory is deterministic because of some math involved in the calculation isn't related to the fact that experimental outcomes cannot be predicted. The present does not determine the future. Vector Shift The quantum state of a system is completely characterized by a state vector only when the system is a pure state.
The state vector evolves in two different ways, described by two postulates: the Schrödinger postulate (valid when there are no measurements) and the measurement postulate. The Schrödinger postulate describes a deterministic and reversible evolution $U$. The measurement postulate describes a non-deterministic and irreversible evolution $R$. $R$ is not derivable from $U$. In fact $R$ is incompatible with $U$, and that is the reason why the founding fathers introduced two evolution postulates in QM. Indeed, assuming an initial superposition of two states for the composite supersystem (system + apparatus + environment) $$|\Psi\rangle = a |A\rangle + b |B\rangle ,$$ the result of a measurement is either $|A\rangle$ or $|B\rangle$, but because these states are orthogonal, they cannot both have evolved from a single initial state by a deterministic, unitary evolution, since $|A\rangle = U |\Psi\rangle$ and $|B\rangle = U |\Psi\rangle$ would imply $\langle A|B\rangle = \langle\Psi |U^{\dagger} U | \Psi\rangle = 1$, which is incompatible with the requirement of orthogonality. So, if the result of the measurement was $|B\rangle$, the evolution was $|B\rangle = R |\Psi\rangle$. juanrga The fact that QM is probabilistic and not deterministic is forced by the 4 rules stated below. These rules cannot coexist logically to provide determinism. They lead without effort to the probabilistic interpretation. Yes, unfortunately (for me) I am not a physicist. So take this with a grain of salt. Some thinking about this puzzling issue will make you reach these conclusions based on well-known facts: @Quantum world: 1) Entities have a 'spread' existence. (A kind of 'field of energy' which tries to 'fill' all space). 2) Entities have some 'oscillatory' existence. (Which gives rise to 'interference' phenomena). 3) Interactions between entities are 'discrete'. (They exchange 'quanta' of some stuff). 4) Interactions use the 'minimum amount' of some 'energy stuff'.
The interplay of these facts is what gives rise to the non-determinism (probability) in QM. Let's think of a simple example: Suppose you have 3 entities A, B and C (a 1 sender & 2 receivers scenario), where A is the source of some perturbation to be sent to B and C at the 'same time'. Let's think of the perturbation in practical terms (i.e.: money) and assign it a unit of measure (dollars). Now how would A send 2 dollars total to both of them (B & C)? Well, A should give them 1 dollar each and problem solved!!! However, there is a constraint here (remember #4), and that is: interactions are only done with the minimum currency!!! With that in mind, how can A give B and C one cent (minimum currency) at the same time? Well... it can't!!! At each time (interaction) A must choose between B or C to give away every cent until it completes the 2 dollars to both of them. And if you think a little bit about it, you realize that the only objective solution for A must be to throw an imaginary coin each time to decide who will receive the 1 cent! [Of course, for this 1 sender & 2 receivers situation, a coin with 2 faces fits right! But for other scenarios, the coin or dice will have to change.] In the analog world of classical mechanics, A would send an infinitely small amount of money to both of them (no minimum currency constraint, and at the same time!) and what we will see is a beautiful continuous growth of B and C's money pockets. No need to deal with probabilities!!! If you think carefully, in plain simple terms, probability arises from the discrete nature of interactions between entities. This is the real deal which makes everything so strange and interesting. [Hope this general and somewhat vague answer gives you a clue about why probability arises in the description offered by QM] The question now is: Why does it have to be like that? fante
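A quick numerical check of the unitarity argument in an earlier answer (my own sketch; the randomly generated unitary and the 4-dimensional state space are arbitrary choices): unitary maps preserve inner products, so two mutually orthogonal outcome states cannot both equal $U|\Psi\rangle$ for the same $|\Psi\rangle$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random unitary via QR decomposition of a complex Gaussian matrix
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Q, _ = np.linalg.qr(M)   # Q satisfies Q† Q = I

a = rng.normal(size=4) + 1j * rng.normal(size=4)
b = rng.normal(size=4) + 1j * rng.normal(size=4)

# Unitary evolution preserves inner products: <Qa, Qb> = <a, b>.
# Hence if |A> = U|Psi> and |B> = U|Psi>, then <A|B> = <Psi|Psi> = 1,
# contradicting <A|B> = 0 for orthogonal measurement outcomes.
print(np.allclose(np.vdot(Q @ a, Q @ b), np.vdot(a, b)))  # True
```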
Browse other questions tagged quantum-mechanics probability determinism bells-inequality or ask your own question. What is the motivation for introducing "ontological state" in 't Hooft's deterministic quantum mechanics Why don't physicists interpret randomness in quantum mechanics as ignorance or limitations in our knowledge? Quantum Mechanics - Hidden Variables Can I treat a quantum process as a Markov process? Manipulation of operators in quantum mechanics Different postulates and statistical interpreations of quantum mechanics On a measurement level, is quantum mechanics a deterministic theory or a probability theory? How does spin enter into the path integral approach to quantum mechanics? Operator in quantum mechanics Is non-linear quantum mechanics possible? Can quantum randomness be somehow explained by classical uncertainty?
The probability that a Binomial Distribution deviates from its mean by one standard deviation Let $X$ be a random variable that follows the Binomial Distribution $\text{BIN}(n,p)$, where $n$ is a positive integer while $p\in(0,1)$. Its mean is $np$, and its standard deviation is $\sqrt{np(1-p)}$. Chebyshev's inequality yields that $$\Pr\left(|X-np| > \sqrt{np(1-p)}\right) \le 1,$$ which is trivial. Hoeffding's inequality does not seem to improve the bound when applied directly. When $p=1/2$, is it possible to prove for all $n\ge 1$ that $$\Pr\left(|X-np| > \sqrt{np(1-p)}\right) \le \frac{1}{2}?$$ What can we say for a general $p\in(0,1)$? I found a positive answer to Question 1 (presented below; essentially the same as @Mau314's comment). However, it is not completely satisfactory because we have to verify the inequality for small $n$ (at most 25) numerically. I am still looking forward to an answer that is completely analytical. I am teaching basic probability theory, and these questions occurred to me when I was thinking about the Central Limit Theorem. When $n\rightarrow \infty$, asymptotically we have $$\Pr\left(|X-np| > \sqrt{np(1-p)}\right) \sim \Pr(|Y| > 1) < \frac{1}{2},$$ where $Y$ is a random variable that follows the Standard Normal Distribution. Hence I raise the questions out of curiosity. Note that my main interest is the non-asymptotic behaviour of the probability, because the asymptotic case is characterized by the CLT. One may attack the problem by directly estimating the cumulative distribution function of Binomial Distributions. To this end, bounds for Binomial Coefficients are likely necessary. Results of this kind can be found in, e.g., [Das], [Stanica], [Spencer, Chapter 5], and Wikipedia. Note that non-asymptotic estimations are needed. real-analysis probability probability-theory statistics probability-distributions Nuno $\begingroup$ I very much like your question.
While I fail to prove it theoretically, I can at least say that numerical evaluations agree with your proposition at least until $n=1000$. Convergence to the value you give (approx $1/3$) is very strong there already. $\endgroup$ – Mau314 Feb 14 at 13:36 $\begingroup$ Thanks, @Mau314. You might consider upvoting the question so that more people see it. $\endgroup$ – Nuno Feb 14 at 14:01 $\begingroup$ It's not quite the kind of answer you were hoping for, but you can have some non-asymptotic quantification from the Berry-Esseen theorem: Here it tells us that $$P\left(\frac{|S_n-np|}{\sqrt{np(1-p)}}>1\right)\leq P(|Y|>1)+\frac{1}{2(p(1-p))^{3/2}\sqrt{n}}$$ and in the case $p=0.5$ we can derive that your desired bound at least holds for $n\geq 712$. I know, it's very far from satisfactory... (The 712 can be improved a bit, but it will still be large.) Sorry I don't have time to write it in more detail right now. $\endgroup$ – Mau314 Feb 14 at 14:19 $\begingroup$ Thank you very much, @Mau314. I also noted the B-E bound when checking Wikipedia. $\endgroup$ – Nuno Feb 14 at 14:47 $\begingroup$ I incorporated your comment into the answer below. Thank you very very much, @Mau314. $\endgroup$ – Nuno Feb 15 at 4:21 Here is a positive answer to Question 1. However, it is not completely satisfactory because we have to verify the inequality for small $n$ (at most $25$) numerically. I am still looking forward to an answer that is completely analytical. As @Mau314 points out in his comment, a natural approach is to examine the error in the CLT, a classical result being the Berry-Esseen theorem. See also the question "Berry-Esseen bound for binomial distribution". According to the Berry-Esseen theorem, $$\sup_{x\in\mathbb{R}}|F(x) - \Phi(x)| \le \frac{C[p^2+(1-p)^2]}{\sqrt{np(1-p)}},$$ where $F$ is the cumulative distribution function of $(X-np)/\sqrt{np(1-p)}$ while $\Phi$ is that of the Standard Normal Distribution, and $C$ is a constant. 
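The small-$n$ numerical verification mentioned in the question and comments can be reproduced exactly with a short script; this is a sketch using exact binomial sums (not the asymptotic bound), and the loop range is an illustrative choice:

```python
from math import comb, sqrt

def tail_prob(n, p=0.5):
    """Exact P(|X - np| > sqrt(np(1-p))) for X ~ BIN(n, p)."""
    mu, sd = n * p, sqrt(n * p * (1 - p))
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1) if abs(k - mu) > sd)

# For p = 1/2, the maximum over 1 <= n <= 300 is attained at n = 2,
# where the probability equals exactly 1/2, so the bound <= 1/2 is sharp.
worst = max(tail_prob(n) for n in range(1, 301))
print(worst)
```

For large $n$ the value drifts down toward $\Pr(|Y|>1)\approx 0.317$, consistent with the CLT heuristic in the question.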
It is known that $C<0.5$ (see [Shevtsova]), which holds for a large class of distributions besides Binomial. [Nagaev and Chebotarev, Theorem 2] proves $C\le0.4215$ for $\text{BIN}(n,1/2)$. According to such bounds, $$\Pr\left(|X-np| > \sqrt{np(1-p)}\right) \le \Pr(|Y|>1) + \frac{2C[p^2+(1-p)^2]}{\sqrt{np(1-p)}} < \frac{1}{3} + \frac{2C[p^2+(1-p)^2]}{\sqrt{np(1-p)}}.$$ Thus $$\Pr\left(|X-np| > \sqrt{np(1-p)}\right) < \frac{1}{2}$$ for $$n \ge \frac{144 C^2[p^2+(1-p)^2]^2}{p(1-p)}.$$ When $p=1/2$, we find that $n\ge 26$ suffices to guarantee the desired inequality. Indeed, $n\ge 22$ is enough with a bit more care. One can check numerically that the desired inequality (for $p=1/2$) holds for $n\le 25$ as well. Hence the answer to Question 1 is positive. Alternatively, one can use [Hipp and Mattner, Corollary 1.5], which says that $$\left|P\left(X< \frac{n-\sqrt{n}}{2}\right)-\Phi(-1)\right| \le \frac{1}{\sqrt{2\pi n}}.$$ Hence $$\Pr\left(\left|X- \frac{n}{2}\right| > \frac{\sqrt{n}}{2}\right)\le \Pr(|Y|>1) + \sqrt{\frac{2}{\pi n}}.$$ This will justify the desired inequality for $n\ge 23$. The cases with $1\le n \le 22$ can be checked numerically. $\begingroup$ Congrats! So being more careful with the constants is more worthwhile than I thought! $\endgroup$ – Mau314 Feb 15 at 7:19 $\begingroup$ Thank you, @Mau314. $\endgroup$ – Nuno Feb 15 at 7:25 This is too long for a comment, but isn't a full answer. Motivation In de Moivre's derivation of his famous Central Limit Theorem for Bernoulli random variables, he also derived a local limit theorem: (de Moivre) Denote $P_n(k) = \binom{n}{k}p^k(1-p)^{n-k}$. Then \begin{align*} \sup_{x: |x| \le \psi(n)} \left|\frac{P_n(np + x \sqrt{npq})}{\frac{1}{\sqrt{2\pi npq}}e^{-x^2/2}} - 1\right| \rightarrow 0 \end{align*} for some function $\psi(n) = o((npq)^{1/6})$. 
What fails for Berry-Esseen is the fact that the sup is taken over $x \in \mathbb{R}$, while in reality we want it taken over $|x| \le 1$ (in the parametrization of de Moivre above). In the proof of this local limit theorem, asymptotics are used, and hence a lot of information is lost, including the exact form of $\psi(n)$ and the rate at which the expression above goes to 0. If we try to reproduce this proof, while being mindful to keep track of all the pesky terms and bound them with inequalities rather than big-$O$ notation, then we might get somewhere. A partial journey Using Stirling's formula, we may express \begin{align*} P_n(k) = \frac{1}{\sqrt{2\pi\hat{p}\hat{q}}}\exp\left(-n D(\hat{p}\|p)\right) \mathcal{E}(n, k, n-k) \end{align*} where \begin{align*} \hat{p} &= \frac{k}{n}, \quad \hat{q} = 1 - \hat{p}, \\ D(a\|b) &= a \log\frac{a}{b} + (1-a)\log \frac{1-a}{1-b} \ge 2(a - b)^2, \\ \mathcal{E}(n,k,n-k) &= \frac{1 + \frac{1}{12n} + \frac{1}{288n^2} - \cdots}{(1 + \frac{1}{12k} + \frac{1}{288k^2} - \cdots)(1 + \frac{1}{12(n-k)} + \frac{1}{288(n-k)^2} - \cdots)} \le \exp\left(\frac{1}{12n}\right) \end{align*} Therefore, \begin{align*} P_n(k) \le \frac{1}{\sqrt{2\pi\hat{p}\hat{q}}}\exp\left(-2n (\hat{p} - p)^2\right) \exp\left(\frac{1}{12n}\right) \end{align*} Letting $k = np + x \sqrt{npq}$ for $|x| \le 1$, \begin{align*} P_n(np + x\sqrt{npq}) &\le \frac{1}{\sqrt{2\pi pq}}\exp\left(-2pq x^2 \right) \exp\left(\frac{1}{12n}\right) \frac{\sqrt{pq}}{\sqrt{\hat{p}\hat{q}}} \\ &\le \frac{1}{\sqrt{2\pi pq}}\exp\left(-2pq x^2 \right) \exp\left(\frac{1}{12n}\right) \frac{\sqrt{pq}}{\sqrt{(p - \sqrt{pq/n})(q - \sqrt{pq/n})}} \end{align*} Let $x_k = -1 + 2k/N$ for $k = 0, \cdots, N$; arguing via Riemann integration, \begin{align*} \sum_{k=0}^{N} P_n(np + x_k\sqrt{npq})\cdot \frac{2}{N} &\le \sum_{k=0}^{N}\frac{1}{\sqrt{2\pi pq}}\exp\left(-2pq x_k^2 \right) \exp\left(\frac{1}{12n}\right) \frac{\sqrt{pq}}{\sqrt{(p - \sqrt{pq/n})(q - \sqrt{pq/n})}}\cdot \frac{2}{N} \\ 
&\color{red}{\approx} \exp\left(\frac{1}{12n}\right) \frac{\sqrt{pq}}{\sqrt{(p - \sqrt{pq/n})(q - \sqrt{pq/n})}} \int_{-1}^{1} \frac{1}{\sqrt{2\pi pq}}\exp\left(-2pq x^2 \right) dx \end{align*} This last step will certainly need some justification and inequalities relating Riemann sums and their respective integrals. I hope this is a good start to an analytical answer; note that I've written out as many inequalities as possible, and while I simplified a few, you can retain them through the steps to obtain sharper bounds. Tom Chen
Tôn Đức Thắng Let  denote the area of the plane figure bounded by the given curves and the Ox axis. Find the value of . Which of the fractions below is smaller than $\frac{3}{7}?$ A simple pendulum oscillates harmonically at a place with gravitational acceleration , with the equation of its arc displacement , t measured in s. When the pendulum passes through the equilibrium position, the ratio of the string tension to the weight equals: Given the function . It is known that for  the line  cuts the graph at 3 distinct points O(0; 0), A, B. On what curve does the midpoint I of the segment AB always lie? Read the following and mark the letter A, B, C or D on your answer sheet to indicate the correct answer to each of the questions from 22 to 29: Harvard University, today recognized as part of the top echelon of the world's universities, came from very inauspicious and humble beginnings. This oldest of American universities was founded in 1636, just sixteen years after the Pilgrims landed at Plymouth. Included in the Puritan emigrants to the Massachusetts colony during this period were more than 100 graduates of England's prestigious Oxford and Cambridge universities, and these university graduates in the New World were determined that their sons would have the same educational opportunities that they themselves had had. Because of this support in the colony for an institution of higher learning, the General Court of Massachusetts appropriated 400 pounds for a college in October of 1636 and early the following year decided on a parcel of land for the school; this land was in an area called Newetowne, which was later renamed Cambridge after its English cousin and is the site of the present-day university. When a young minister named John Harvard, who came from the neighboring town of Charlestowne, died from tuberculosis in 1638, he willed half of his estate of 1,700 pounds to the fledgling college. In spite of the fact that only half of the bequest was actually paid, the General Court named the college after the minister in appreciation for what he had done. 
The amount of the bequest may not have been large, particularly by today's standards, but it was more than the General Court had found it necessary to appropriate in order to open the college. Henry Dunster was appointed the first president of Harvard in 1640, and it should be noted that in addition to serving as president, he was also the entire faculty, with an entering freshman class of four students. Although the staff did expand somewhat, for the first century of its existence the entire teaching staff consisted of the president and three or four tutors. Question: The pronoun "they" in the second paragraph refers to _______________ Given a circle of radius 6. The sector of the circle between the two radii OA and OB is cut away, and the two radii are then joined so as to form a cone (as in the figure). The volume of the corresponding cone is: The line  cuts the graph  at two distinct points A and B. The area of the triangle OAB is then: Two coherent wave sources perform harmonic oscillations perpendicular to the surface of a liquid with the same frequency, differing in phase by . It is known that on the line joining the two sources, among the points of zero amplitude, the point M nearest to the perpendicular bisector lies at a distance  from it. The value of  is: Question 26: According to the passage, early violins were different from modern violins in that early violins____________. Of all modern instruments, the violin is apparently one of the simplest. It consists in essence of a hollow, varnished wooden sound box, or resonator, and a long neck covered with a fingerboard, along which four strings are stretched at high tension. The beauty of design, shape, and decoration is no accident; the proportions of the instrument are determined entirely by acoustical considerations. Its simplicity of appearance is deceptive. About 70 parts are involved in the construction of a violin. Its tone and its outstanding range of expressiveness make it an ideal solo instrument. No less important, however, is its role as an orchestral and chamber instrument. 
In combination with the larger and deeper-sounding members of the same family, the violins form the nucleus of the modern symphony orchestra. The violin has been in existence since about 1550. Its importance as an instrument in its own right dates from the early 1600's, when it first became standard in Italian opera orchestras. Its stature as an orchestral instrument was raised further when in 1626 Louis XIII of France established at his court the orchestra known as Les vingt-quatre violons du Roy (The King's 24 Violins), which was to become widely famous later in the century. In its early history, the violin had a dull and rather quiet tone resulting from the fact that the strings were thick and were attached to the body of the instrument very loosely. During the eighteenth and nineteenth centuries exciting technical changes were inspired by such composer-violinists as Vivaldi and Tartini. Their instrumental compositions demanded a fuller, clearer, and more brilliant tone that was produced by using thinner strings and a far higher string tension. Small changes had to be made to the violin's internal structure and to the fingerboard so that they could withstand the extra strain. Accordingly, a higher standard of performance was achieved, in terms of both facility and interpretation. Left-hand technique was considerably elaborated, and new fingering patterns on the fingerboard were developed for very high notes. Compute the derivative of the function .
BMC Chemistry Reversible uptake of molecular oxygen by heteroligand Co(II)–l-α-amino acid–imidazole systems: equilibrium models at full mass balance Marek Pająk1, Magdalena Woźniczka1, Andrzej Vogt2 & Aleksander Kufelnicki1 Chemistry Central Journal volume 11, Article number: 90 (2017) The paper examines Co(II)–amino acid–imidazole systems (where amino acid = l-α-amino acid: alanine, asparagine, histidine) which, when in aqueous solutions, activate and reversibly take up dioxygen, while maintaining the structural scheme of the heme group (imidazole as axial ligand and O2 uptake at the sixth, trans position), thus imitating natural respiratory pigments such as myoglobin and hemoglobin. The oxygenation reaction shows higher reversibility than for Co(II)–amac systems with analogous amino acids without imidazole. Unlike previous investigations of the heteroligand Co(II)–amino acid–imidazole systems, the present study accurately calculates all equilibrium forms present in solution and determines the \(K_{\text{O}_2}\) equilibrium constants without using any simplified approximations. The equilibrium concentrations of Co(II), amino acid, imidazole and the formed complex species were calculated using the formation-constant data obtained for analogous systems under oxygen-free conditions. pH-metric and volumetric (oxygenation) studies allowed the stoichiometry of the O2 uptake reaction and the coordination mode of the central ion in the forming oxygen adduct to be determined. The values of the dioxygen uptake equilibrium constants \(K_{\text{O}_2}\) were evaluated by applying the full mass balance equations. Investigations of oxygenation of the Co(II)–amino acid–imidazole systems indicated that dioxygen uptake proceeds along with a rise in pH to 9–10. 
The percentage of reversibility noted after acidification of the solution to the initial pH ranged within ca 30–60% for alanine, 40–70% for asparagine and 50–90% for histidine, rising with the increasing share of amino acid in the Co(II):amino acid:imidazole ratio. Calculations of the share of the free Co(II) ion, as well as of the particular complex species existing in solution beside the oxygen adduct (regarding dioxygen bound both reversibly and irreversibly), indicated quite significant values for the systems with alanine and asparagine; in those cases the equilibrium of the oxygenation reaction is shifted to the right to a relatively lesser extent. The experimental results indicate that the "active" complex, able to take up dioxygen, is a heteroligand CoL2L′ complex, where L = amac (an amino acid with a non-protonated amine group) while L′ = Himid, with the N1 nitrogen protonated within the entire pH range under study. Moreover, the corresponding log \(K_{\text{O}_2}\) value at various initial total Co(II), amino acid and imidazole concentrations was found to be constant within the limits of error, which confirms those results. The highest log \(K_{\text{O}_2}\) value, 14.9, occurs for the histidine system; in comparison, asparagine gives 7.8 and alanine 9.7. This high value is most likely due to the participation of the additional effective N3 donor of the imidazole side group of histidine. The Co(II)–amac–Himid systems formed by using a [Co(imid)2]n polymer as starting material demonstrate that the reversible uptake of molecular oxygen occurs by forming dimeric μ-peroxy adducts. The essential impact on the electron structure of the dioxygen bridge, and therefore on the reversibility of O2 uptake, is due to the imidazole group at the axial position (trans towards O2). 
However, the results of reversibility measurements of O2 uptake unequivocally indicate a much higher effectiveness of dioxygenation than in systems in which the oxygen adducts are formed in equilibrium mixtures during titration of solutions containing Co(II) ions, the amino acid and imidazole, separately. The capability of compounds called natural respiratory pigments to reversibly absorb molecular oxygen has been the subject of intensive research since the end of the 19th century and has been inspiring the creation of artificial systems to imitate their activity [1,2,3,4,5,6,7,8,9,10,11,12,13,14]. Example models of synthetic oxygen carriers include mixed complexes of the type Co(II)–auxiliary ligand–imidazole, in which imidazole coordinates in the trans position relative to the bound O2 molecule, like the imidazole of the proximal histidine in myoglobin and hemoglobin [15]. In contrast to classical methods of preparing such compounds by mixing separate solutions of Co(II) salts, appropriate amino acids and imidazole [16,17,18], an original method has been applied, in which cobalt(II) and imidazole were introduced in the form of a polymeric, pseudo-tetrahedral, semi-conductive complex [Co(imid)2]n. This results in the formation of definite, unique structures with an imidazole molecule in an axial position opposite the O2 molecule [19,20,21,22,23,24,25,26]. [Co(imid)2]n is a coordination compound crystallizing in an infinite polymeric network, in which each cobalt(II) ion is joined via imidazole bridges with four adjacent metal ions [27, 28]. Each Co(II) ion forms two dative bonds with the nitrogen atoms of two deprotonated imidazole moieties and two ionic bonds with the nitrogen atoms of two other imidazoles (Fig. 1). 
Therefore, this alternative method of obtaining dioxygen complexes with a strictly defined structure by starting from the [Co(imid)2]n polymer is much more effective than the method in which appropriate so-called "active" complexes capable of reversible dioxygen uptake are formed in an equilibrium mixture during titration of a solution containing Co(II) ions, the suitable auxiliary ligand (e.g. amino acid) and imidazole [16, 17]. Schematic structure of the polymeric [Co(imid)2]n complex The peculiar property of O2 transport in such Co(II)–amac–Himid systems, as with the natural dioxygen carriers, results from the rapidly established equilibrium present in solution between the "active" form and the dioxygen-containing form. The "active" form, responsible for the dioxygen transport, is usually a paramagnetic, high-spin, hexacoordinate Co(II) complex of CoII(amac)2(Himid)(H2O) composition, containing two chelate-bound amino acid molecules forming an equatorial plane, as well as two axial ligands, imidazole and water. After substitution of the dioxygen molecule for water, a dimeric, diamagnetic [CoIII(amac)2(Himid)]2O2 2− complex is formed with the O2 molecule coordinated in the peroxide mode, i.e. with an O2 2− (μ-peroxy) bridge between two cobalt ions formally oxidized to Co(III). This complex, because of the eventual partial irreversible oxidation of Co(II) to mononuclear Co(III) products, is frequently denoted as an intermediate oxygen adduct. Owing to the elongation of the dioxygen bond from 120.7 pm for the triplet O2 to 149.0 pm for the peroxide O2 2− anion, the oxygen adducts may be used as intermediate complexes in catalytic processes [29,30,31,32,33,34]. The O2 2− (μ-peroxy) bridge exists within pH = 3–9, but upon a rise in basicity above pH 10, it is transformed into a poorly reversible dibridged Co(III)O2 2−OH−Co(III) (μ-peroxy–μ-hydroxy) form. 
This double bridge appears in place of the two carboxyl groups, which easily undergo dissociation and which are found in the cis position towards the coordinated dioxygen molecule. Such a complex is a much less effective O2 carrier due to its higher affinity for autoxidation. An alternative known description of the oxygen bridges is the η-type form, corresponding to "side-on" μ-peroxy bridge structures [35]. In turn, acidification of the solution at a low temperature (−3 to 0 °C) leads to protonation of the μ-peroxy bridge, whereas the intermediate Co(III)O2 2−H+Co(III) product formed undergoes rapid decay accompanied by Co(III) ion formation. In addition, at a temperature around 0 °C and in acidic medium, the O2 2− (μ-peroxy) bridge may be subsequently oxidized by means of strong oxidants, e.g. Ce4+ or MnO4 − ions or Cl2. As a result, a paramagnetic, stable {[CoIII(amac)2(Himid)]2O2 −}+ complex is formed, with an irreversibly bound dioxygen moiety in the Co(III)–O2 −–Co(III) (μ-superoxy) bridge. All known O2 carriers (both natural and synthetic) form complexes of two types: monomeric, with an M:O2 stoichiometry of 1:1, and dimeric, with an M:O2 stoichiometry of 2:1. An analysis of the theoretically estimated values of the standard Gibbs free energy of the O2 reactions with metal ions and their complexes could be expected to favor the dimeric structures. In fact, the ΔG° value for the dimer formation reaction attains negative values for a much higher number of metals than is the case for monomer formation. This effect corresponds to a decided right-shift of the complex-formation equilibrium [36]. These data are confirmed in practice: among all the known dioxygen carriers in aqueous solution, it is the stable dimeric complexes that are observed to form. Previous investigations of the Co(II)–amac–Himid systems have not included the key aspect, i.e. 
accurate calculations of the Co(II), amac and Himid concentrations at equilibrium, by using the formation constants reported in our work for analogous oxygen-free systems [37]. These calculations allow the equilibrium concentrations of all species present in solution to be determined, and the \(K_{\text{O}_2}\) equilibrium constants to be evaluated without using any simplified approximations, which for instance take into account only the "active" complex and the oxygen adduct within the mass balance system [19, 38]. Moreover, the advantage of the experimental methods used in the present work, i.e. a direct gas-volumetric experiment with simultaneous pH measurement, is that it allows the degree of reversibility of O2 uptake to be taken into account. As for many other complexes, including a majority of complexes with amino acids and peptides, the irreversible part of the reaction is quite rapid (e.g. t1/2 <5 min for glygly), which excludes the use of the most commonly applied method based only on potentiometric titration [39,40,41]. The optimum amac to Co(II) ratio equaled 2:1. Above this value, the amount of dioxygen taken up did not change (see Additional file 1: Figure S1). The amount of imidazole released from the [Co(imid)2]n moiety as a result of the mixed Co(II)–amac–Himid–O2 complex formation (0.3 mol Himid per 0.3 mol Co) indicates that the stoichiometric Co(II):imidazole ratio was 1:1, which confirms that one of the two [Co(imid)2]n imidazole moieties remains in the coordination sphere of cobalt(II) of the final complex (see Additional file 2: Figure S2). In other words, the structures of the forming dioxygen adducts are unified by the presence of one imidazole in the coordination sphere. Investigations of oxygenation of the Co(II)–amac–Himid systems indicated that dioxygen uptake is accompanied by a rise in pH to 9–10. 
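The full-mass-balance step described above can be sketched in code. The log β values below are illustrative placeholders (not the constants reported in [37]), and the damped successive-substitution update is just one common way of solving the three simultaneous balance equations for the free concentrations:

```python
# Hypothetical sketch of a three-component speciation calculation.
# Formation constants (log beta) are ILLUSTRATIVE, not the paper's values.
SPECIES = [
    # (Co stoich, amac stoich, Himid stoich, log beta)
    (1, 1, 0, 4.0),   # CoL
    (1, 2, 0, 7.3),   # CoL2
    (1, 0, 1, 2.4),   # CoL'
    (1, 2, 1, 9.8),   # CoL2L', the "active" complex
]

def totals(m, l, lp):
    """Total (analytical) concentrations implied by free [M], [L], [L']."""
    TM, TL, TLp = m, l, lp
    for a, b, c, logb in SPECIES:
        conc = 10.0 ** logb * m**a * l**b * lp**c
        TM += a * conc
        TL += b * conc
        TLp += c * conc
    return TM, TL, TLp

def speciate(C_M, C_L, C_Lp, iters=20000, tol=1e-10):
    """Free concentrations by damped successive substitution."""
    m, l, lp = C_M, C_L, C_Lp
    for _ in range(iters):
        TM, TL, TLp = totals(m, l, lp)
        if max(abs(TM - C_M) / C_M, abs(TL - C_L) / C_L,
               abs(TLp - C_Lp) / C_Lp) < tol:
            break
        m *= (C_M / TM) ** 0.4    # fractional-power damping aids convergence
        l *= (C_L / TL) ** 0.4
        lp *= (C_Lp / TLp) ** 0.4
    return m, l, lp

m, l, lp = speciate(0.01, 0.02, 0.01)       # totals in mol/L, illustrative
active = 10.0 ** 9.8 * m * l**2 * lp        # equilibrium [CoL2L']
print(m, l, lp, active)
```

With the free concentrations in hand, the equilibrium constant of O2 uptake follows from the measured adduct concentration and the dissolved O2 level, with no simplifying assumption about which species dominate.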
An example of the time dependence between pH and the number of mmoles of bound dioxygen is shown for l-α-histidine in Fig. 2. The percentage of reversibility noted after acidification of the solution to the initial pH ranged within ca 30–60% for l-α-alanine, 40–70% for l-α-asparagine and 50–90% for l-α-histidine; this rose as the share of the amino acid in the Co(II):amac:Himid ratio increased (Table 1). The results confirm that the axial imidazole plays a role in enhancing reversibility of the O2 uptake as opposed to the systems with the same amino acids but lacking imidazole [42]. Imidazole is in fact an important complement of the coordination sphere of the central ion as a donor of the lone electron pair of the N3 nitrogen. The Co(II)–l-α-histidine–Himid system at molar ratio 0.3:0.75:0.3 (mmol). Dependence of pH and number of mmole O2 bound on duration of the oxygenation reaction at a temperature of ~0 °C (vertical segment corresponds to reversibility of O2 uptake after saturation = 86.69%) Table 1 Uptake of O2 by the Co(II)–l-α-amino acid–imidazole systems (duration time of uptake, final value of pH, number of mmoles of O2 bound, percentage of reversibility) In comparison with the histidine system, the systems with alanine and asparagine demonstrated somewhat higher values for the share of the free Co(II) ion and the particular complex species existing in solution apart from the oxygen adduct (regarded as the entire amount of cobalt engaged in both reversible and irreversible oxygenation) (Fig. 3). For these two amino acids, the oxygenation equilibrium is shifted to the right to a relatively lesser extent. The competitive binary complexes CoL3 (or CoL2 for histidine), which are able to reversibly take up dioxygen, are also present in solution in relatively low concentrations, below 0.2% (Fig. 3); according to Fallab's rule, they have the sufficient number of three N donors in the coordination sphere [40]. 
However, the experimental results indicate that the only active complexes taking up dioxygen in practice are heteroligand species with imidazole as the second ligand (Additional file 2: Figure S2). Therefore, the \(K_{\text{O}_2}\) equilibrium constants can be calculated using formula (16), where the "active" complex is an appropriate heteroligand species with a concentration directly following from the full mass balance equation. The fact that the value of log \(K_{\text{O}_2}\) remained constant between different initial total Co(II), amino acid and imidazole concentrations, within limits of error (Table 2), indicates that the "active" complex was a heteroligand CoL2L′ complex for alanine and asparagine (Fig. 4), but also for histidine, although its structure differs by participation of the N3 nitrogen of the additional imidazole side group (Fig. 5). The imidazole N1–H side group does not dissociate in the measurable pH range due to its pK of 14.4 [43]. In addition, for histidine, as in the case of alanine and asparagine, the dioxygen substitutes for a relatively weak donor, i.e. the deprotonated carboxyl oxygen, instead of the water molecule. Percentage share of free Co(II) ion and complex forms at fixed equilibrium of the O2 uptake reaction in the Co(II)–amac–Himid–O2 systems. l-α-amino acid (amac) = a alanine, b asparagine, c histidine. Internal diagrams show the equilibrium share of species other than the O2 adduct on an extended scale. Cadd O2 denotes the total concentration of the dioxygen adduct (with dioxygen bound both reversibly and irreversibly). L amac, L′ Himid. Molar ratio Co(II):amac:Himid = 0.3:0.6:0.3 (mmol) Table 2 Equilibrium constants \(K_{\text{O}_2}\) of dioxygen uptake in the Co(II)–l-α-amino acid–imidazole–O2 systems Coordination modes in the Co(II)–amac–Himid system with amac = Ala (R = CH3) and Asn (R = CH2–CO–NH2). 
a "active" heteroligand complex CoL2L′, where L amac, L′ Himid, b dioxygen adduct Coordination modes in the Co(II)–amac–Himid system with amac = His. a "active" heteroligand complex CoL2L′, where L amac, L′ Himid, b dioxygen adduct The optical absorption spectra for the Co(II)–amac–Himid system with histidine indicate a significant increase of the molar absorption coefficients resulting from the O2 uptake (Fig. 6); similar observations have been reported for analogous systems with alanine and asparagine [38]. The low-energy asymmetric d–d band in curve (a) can be attributed to the asymmetric, quasi-octahedral T1g→T1g(P) transition of the Co(H2O)6 2+ aqua ion. Curve (b) is a spectral curve mainly characterizing the formed heteroligand CoL2L′ active complex, predominating at pH ~9 under oxygen-free conditions, with a blue-shifted d–d band at λmax 485 nm (εmax ~20). Curve (c) corresponds to a μ-peroxo-type dioxygen adduct with two components of the LMCT band from the split antibonding π*(O2) orbital of dioxygen to the unfilled dσ*(Co) orbital: π*h→dσ* (in-plane) and π*v→dσ* (out-of-plane). It can be seen that the molar absorption coefficient of both bands (εmax ~5 × 10⁴) is much higher than that of the "active" complex. The intensity of the two LMCT components was relatively comparable, this being typical of monobridged peroxo complexes, which are usually non-planar [38, 44]. UV/Vis spectra in the Co(II)–amac–Himid system at a temperature of ~0 °C, where amac = l-α-histidine. Right Y-axis: molar absorption coefficients of (a) Co(II) and (b) the "active" heteroligand complex CoL2L′, where L amac, L′ Himid; left Y-axis: (c) molar absorption coefficients of the dioxygen adduct 
This is not surprising, as it is already known that for histidine the "active" complex is the most thermodynamically stable complex also under oxygen-free conditions [37]. On the other hand, the lower value of log \(K_{\text{O}_2}\) for the asparagine system in comparison with the alanine system is most likely due to steric hindrance, which arises from one of the asparagine amide side groups during formation of the dimer. In this case, a greater share of the amino acid in the Co(II):amac:Himid molar ratio favorably displaces the equilibrium towards oxygen adduct formation. This also makes it possible to obtain chemically reasonable (positive) solutions of equation system (1) at higher excesses of the amino acid (cf. Table 2). For the two remaining amino acids, alanine and histidine, particularly histidine, the oxygen adduct (for both the reversible and irreversible parts together) almost entirely uses up the accessible cobalt when the share of the amino acid greatly exceeds the stoichiometric ratio Co(II):amac:Himid = 1:2:1; the concentrations of the other complex species, including the "active" complex, fall to such low levels that it is impossible for the equation system (1) to converge to three positive solutions. At a decreased temperature close to 0 °C, the Co(II)–amac–Himid systems demonstrate enhanced reversible uptake of molecular oxygen. Coordination of the dioxygen molecule by the "active" complex occurs by exchange of the axial H2O or carboxyl oxygen for O2, together with simultaneous formal intramolecular oxidation of Co(II) to Co(III) and reduction of the dioxygen molecule to a bridging peroxide O2 2− ion. The log \(K_{\text{O}_2}\) values are highest for the oxygenated forms of the heteroligand complexes with histidine, as their coordination sphere is formed by a chelating tridentate ligand (with imidazole, NH2, COO− donors). 
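The reported log \(K_{\text{O}_2}\) values can be translated into the standard Gibbs energies discussed earlier via ΔG° = −RT ln K. A minimal sketch, assuming ideal behaviour at the ~0 °C working temperature (the conversion itself, not a value from the paper):

```python
from math import log

R = 8.314   # J/(mol*K)
T = 273.15  # K; measurements were made near 0 degrees C

def dG_kJ(logK):
    """Standard Gibbs energy (kJ/mol) from a decadic log K."""
    return -R * T * log(10) * logK / 1000.0

# log K_O2 values quoted above for the three amino-acid systems
for amac, lk in [("His", 14.9), ("Ala", 9.7), ("Asn", 7.8)]:
    print(f"{amac}: dG = {dG_kJ(lk):.1f} kJ/mol")
```

The roughly 27 kJ/mol gap between the histidine and alanine systems quantifies how strongly the extra N3 donor pulls the oxygenation equilibrium to the right.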
The essential impact on the electron structure of the dioxygen bridge, and thereby on the reversibility of O2 uptake, is due to the first of the groups mentioned above. The two remaining amac ligands engaged in the mixed complexes (i.e. alanine and asparagine) were bidentate ligands. Even the potentially tridentate l-α-asparagine behaves as a bidentate ligand in the attainable pH range of around 9–10, as illustrated in Table 1, which also follows from previous reports concerning oxygen-free conditions. However, the reversibility of O2 uptake in the present systems containing an axial imidazole is unequivocally much higher than that previously reported for Co(II)–amac systems in the absence of imidazole. Chemically reasonable (positive) values of the [Co(II)], [amac] and [Himid] equilibrium concentrations, and hence appropriate log \(K_{\text{O}_2}\) values, could be attained only for limited Co(II):amac:Himid molar ratios, irrespective of the degree of equilibrium displacement towards oxygen adduct formation. l-α-amino acids: asparagine, pure, Sigma Chemical Co.; histidine, pure (≥99.0%), Fluka Chemie GmbH; alanine, pure, International Enzymes Limited; polymeric [Co(imid)2]n complex prepared by A. Vogt, Faculty of Chemistry, University of Wrocław [27, 38, 45]; potassium nitrate(V), p.a., P.O.Ch. Gliwice; nitric(V) acid, p.a., P.O.Ch. Lublin; sodium hydroxide, a 0.5021 M solution standardized against potassium hydrogen phthalate; acetone, p.a., P.O.Ch. Gliwice; oxygen, pure medical grade (99.7–99.8%); argon, p.a. (99.999%) from Linde Gas (Poland). 
An isobaric laboratory set for volumetric and pH-metric measurements (see Additional file 3: Figure S3) was composed of the following elements: a double-walled thermostated glass vessel of volume ca 80 mL, tightly closed with a silicone stopper and equipped with a burette nozzle supplying the 4 M HNO3; a combination pH glass electrode C2401, Radiometer (Copenhagen); a Radiometer Analytical 101 temperature sensor; a gas inlet tube (dioxygen) connected to the gas burette; an outlet tube; and a glass rod to hang a small glass vessel with the [Co(imid)2]n polymer. A PHM 85 Precision pH Meter, Radiometer (Copenhagen), a Fisherbrand FBC 620 cryostat (Fisher Scientific), an Electromagnetic Stirrer ES 21H (Piastów, Poland), an oxygen tank with a reducing valve and a CO-501 Oxygen Meter, Elmetron (Zabrze, Poland) were also used. The following glass set was used to determine the imidazole released from the coordination sphere of the mixed complexes: a suction flask, a water suction pump, a washer and a Schott funnel POR 40 (see Additional file 4: Figure S4).

Measurement procedures

Oxygenation reaction of the Co(II)–l-α-amino acid–imidazole systems

The thermostated vessel was filled with a solution containing an exactly weighed sample of the chosen amino acid, so as to obtain the intended Co(II):amac:Himid ratio upon adding the [Co(imid)2]n polymer. The solution was adjusted to constant ionic strength I = 0.5 M with potassium nitrate and topped up with water to 30 mL. A small glass vessel with 0.3 mmol of the [Co(imid)2]n polymer (and hence 0.6 mmol of imid) was hung from a glass rod above the solution surface. After the entire vessel reached a temperature close to 0 °C [a decrease of temperature inhibits the irreversible oxidation of Co(II)], the initial pH and the initial volume level in the gas burette were read, and the main experiment was started by inserting the polymer into the sample.
The current values of pH and dioxygen volume were noted at defined time intervals up to saturation. A rise in pH was observed, along with a change in color from entirely colorless to brown or even dark brown. At the end of oxygenation, which occurred on reaching pH ≈ 9–10, the solution was acidified to the initial pH with a small aliquot of 4 M nitric acid solution. This caused a partial discoloration of the solution and evolution of dioxygen. The volume of dioxygen evolved relative to the total volume of dioxygen bound served as a measure of the reversibility of oxygenation.

Determination of the reaction stoichiometry of dioxygen uptake in the Co(II)–l-α-amino acid–imidazole systems by the molar ratio method

For each system under study, a plot of the number of mmol of bound O2 against the C_L/C_M ratio was prepared, where C_L is the total amac concentration and C_M the total Co(II) concentration; this enabled the determination of the stoichiometry of dioxygen uptake.

Confirmation of the coordination mode of the central ion by determination of the number of imidazole molecules released from the coordination sphere of the Co(II)–l-α-amac–imidazole–O2 system

Exactly weighed samples of amino acid and the [Co(imid)2]n polymer were placed in a washer so as to attain a molar ratio of Co(II):l-α-amac:imidazole = 0.3:0.9:0.3 (mmol). The washer, immersed in ice, was filled with 2 mL of argonated water and the forming "active" complex was then argonated continuously for 15 min. After 10 min, argonation was changed to oxygenation. The final content of the washer, the freshly formed dioxygen complex, was quantitatively transferred to a Schott funnel previously filled with oxygenated acetone. The oxygen complex, insoluble in water, precipitated as a dark brown solid. At that moment a water suction pump was connected to the Schott funnel, and the acetone was filtered off together with the water containing the imidazole released upon oxygen complex formation.
The filtrate obtained was titrated potentiometrically with nitric acid. All steps of the experiment were carried out at a temperature close to 0 °C.

Calculation of the equilibrium concentrations of Co(II), amac and Himid and evaluation of the equilibrium \(K_{{{\text{O}}_{2} }}\) constants

The calculations were performed with the Mathcad 13 computer program [46]. The non-linear mass-balance equation system was solved by the Levenberg–Marquardt method [47, 48], which converges faster than the Gauss–Newton iteration. This is achieved by introducing an additional parameter λ into the Gauss–Newton iteration formula, which corrects the search direction depending on whether the successive solutions approach or depart from convergence. The procedure treats the equilibrium concentrations [M], [L], [L′] (where [M] = [Co(II)]) as the unknown quantities \(x_1, x_2, x_3\) of the system:

$$\begin{aligned} f_{1}(x_{1},x_{2},x_{3}) &= 0 \\ f_{2}(x_{1},x_{2},x_{3}) &= 0 \\ f_{3}(x_{1},x_{2},x_{3}) &= 0 \end{aligned}$$

The solution vector of the system,

$$X = \begin{bmatrix} x_{1} \\ x_{2} \\ x_{3} \end{bmatrix},$$

follows Newton's formula

$$X_{i+1} = X_{i} - \left( F'(X_{i}) \right)^{-1} \cdot F(X_{i})$$

after an appropriate initial estimate of the vector \(X_0\). The function vector is

$$F(X) = \begin{bmatrix} f_{1}(x_{1},x_{2},x_{3}) \\ f_{2}(x_{1},x_{2},x_{3}) \\ f_{3}(x_{1},x_{2},x_{3}) \end{bmatrix}$$

whereas the matrix of derivatives, i.e. the Jacobi matrix, is

$$F'(X) = \begin{bmatrix} \dfrac{\partial f_{1}}{\partial x_{1}} & \dfrac{\partial f_{1}}{\partial x_{2}} & \dfrac{\partial f_{1}}{\partial x_{3}} \\ \dfrac{\partial f_{2}}{\partial x_{1}} & \dfrac{\partial f_{2}}{\partial x_{2}} & \dfrac{\partial f_{2}}{\partial x_{3}} \\ \dfrac{\partial f_{3}}{\partial x_{1}} & \dfrac{\partial f_{3}}{\partial x_{2}} & \dfrac{\partial f_{3}}{\partial x_{3}} \end{bmatrix}$$

with \((F'(X))^{-1}\) in Newton's formula denoting the inverted Jacobi matrix. In the mass-balance system, all the ligand protonation constants (of both amac and Himid) as well as the formation constants of their complexes with Co(II) were known from previous reports [37, 45]. In cumulative form the formation constants may be written as:

$$\beta_{mll'h} = \frac{[\mathrm{M}_{m}\mathrm{L}_{l}\mathrm{L}'_{l'}\mathrm{H}_{h}]}{[\mathrm{M}]^{m}[\mathrm{L}]^{l}[\mathrm{L}']^{l'}[\mathrm{H}]^{h}}$$

The functions used for the equation systems of l-α-alanine and l-α-asparagine reflect the fact that the mixed ML2L′ complex capable of dioxygen uptake (present alongside the non-active mixed MLL′ complex) contains the required three nitrogen donors in the coordination sphere, in accordance with Fallab's "3 N" rule [49]:

$$f_{1} = C_{\mathrm{M}} - Y[\mathrm{M}] - \sum_{l=1}^{3}\beta_{ml}[\mathrm{M}][\mathrm{L}]^{l} - \sum_{l'=1}^{5}\beta_{ml'}[\mathrm{M}][\mathrm{L}']^{l'} - \sum_{l=1}^{2}\beta_{mll'}[\mathrm{M}][\mathrm{L}]^{l}[\mathrm{L}'] - 2C_{\mathrm{O}_{2}}$$

$$f_{2} = C_{\mathrm{L}} - Y_{1}[\mathrm{L}] - \sum_{l=1}^{3}l\,\beta_{ml}[\mathrm{M}][\mathrm{L}]^{l} - \sum_{l=1}^{2}l\,\beta_{mll'}[\mathrm{M}][\mathrm{L}]^{l}[\mathrm{L}'] - 4C_{\mathrm{O}_{2}}$$

$$f_{3} = C_{\mathrm{L}'} - Y_{2}[\mathrm{L}'] - \sum_{l'=1}^{5}l'\,\beta_{ml'}[\mathrm{M}][\mathrm{L}']^{l'} - \sum_{l=1}^{2}\beta_{mll'}[\mathrm{M}][\mathrm{L}]^{l}[\mathrm{L}'] - 2C_{\mathrm{O}_{2}}$$

For l-α-histidine, the mixed complex that does not bind oxygen was an MLL′H species, in which the imidazole side group was protonated at the N3 nitrogen; thus the number of nitrogen atoms in the coordination sphere of the central ion was two, i.e. less than the minimum suggested by Fallab's rule. However, as the number of nitrogen atoms was sufficient in the "active" complex ML2L′ capable of O2 uptake, the equation system was as follows:

$$f_{1} = C_{\mathrm{M}} - Y[\mathrm{M}] - \sum_{l=1}^{2}\sum_{h=0}^{1}\beta_{mlh}[\mathrm{M}][\mathrm{L}]^{l}[\mathrm{H}]^{h} - \sum_{l'=1}^{5}\beta_{ml'}[\mathrm{M}][\mathrm{L}']^{l'} - \beta_{1210}[\mathrm{M}][\mathrm{L}]^{2}[\mathrm{L}'] - \beta_{1111}[\mathrm{M}][\mathrm{L}][\mathrm{L}'][\mathrm{H}] - 2C_{\mathrm{O}_{2}}$$

$$f_{2} = C_{\mathrm{L}} - Y_{1}[\mathrm{L}] - \sum_{l=1}^{2}\sum_{h=0}^{1}l\,\beta_{mlh}[\mathrm{M}][\mathrm{L}]^{l}[\mathrm{H}]^{h} - 2\beta_{1210}[\mathrm{M}][\mathrm{L}]^{2}[\mathrm{L}'] - \beta_{1111}[\mathrm{M}][\mathrm{L}][\mathrm{L}'][\mathrm{H}] - 4C_{\mathrm{O}_{2}}$$

$$f_{3} = C_{\mathrm{L}'} - Y_{2}[\mathrm{L}'] - \sum_{l'=1}^{5}l'\,\beta_{ml'}[\mathrm{M}][\mathrm{L}']^{l'} - \beta_{1210}[\mathrm{M}][\mathrm{L}]^{2}[\mathrm{L}'] - \beta_{1111}[\mathrm{M}][\mathrm{L}][\mathrm{L}'][\mathrm{H}] - 2C_{\mathrm{O}_{2}}$$

where \(C_{\mathrm{M}}\) is the total concentration of the metal Co(II); \(C_{\mathrm{L}}\) the total concentration of the l-α-amino acid; \(C_{\mathrm{L}'}\) the total concentration of imidazole; \(C_{\mathrm{O}_2}\) the concentration of the oxygen adduct; \(\beta_{ml}\) the overall stability constants of the Co(II)–l-α-amino acid complexes; \(\beta_{ml'}\) the overall stability constants of the Co(II)–imidazole complexes; \(\beta_{mll'}\) the overall stability constants of the mixed Co(II)–l-α-alanine/asparagine–imidazole complexes; and \(\beta_{1210}\), \(\beta_{1111}\) the overall stability constants of the mixed Co(II)–l-α-histidine–imidazole complexes.

The hydrolyzed Co(II) aqua-ion and the protonated (uncomplexed) ligand forms were accounted for by the expressions:

$$Y = 1 + K_{\mathrm{OH}}/[\mathrm{H}]$$

$$Y_{1} = 1 + \beta_{\mathrm{LH}}[\mathrm{H}] + \beta_{\mathrm{LH}_2}[\mathrm{H}]^{2} \quad \text{(alanine, asparagine)}$$

$$Y_{1} = 1 + \beta_{\mathrm{LH}}[\mathrm{H}] + \beta_{\mathrm{LH}_2}[\mathrm{H}]^{2} + \beta_{\mathrm{LH}_3}[\mathrm{H}]^{3} \quad \text{(histidine)}$$

$$Y_{2} = 1 + \beta_{\mathrm{L'H}}[\mathrm{H}]$$

where \(K_{\mathrm{OH}}\) is the hydrolysis constant of the Co(II) aqua-ion, equal to \(10^{-9.8}\) [50]; \(\beta_{\mathrm{LH}}\), \(\beta_{\mathrm{LH}_2}\), \(\beta_{\mathrm{LH}_3}\) are the overall protonation constants of the l-α-amino acid; and \(\beta_{\mathrm{L'H}}\) is the protonation constant of imidazole. It is noteworthy that solving the nonlinear equation system from very erroneous initial estimates may lead to quite different results or to a lack of convergence.
However, in the case of the systems under study, the solutions [M], [L] and [L′] may not be negative numbers and must lie between zero and the total concentrations C_M, C_L, C_L′. This makes it possible to reject solutions without chemical meaning. The summary protonation constants of the l-α-amino acids and imidazole, the stability constants of the parent Co(II)–amac and Co(II)–Himid complexes, as well as the stability constants of the heteroligand Co(II)–l-α-amino acid–imidazole complexes, had been determined previously in the same medium and at the same ionic strength as in the present work (KNO3, I = 0.5) [37, 45]. The only different parameter was the temperature: 25.0 °C instead of 0–1 °C. The lack of data at the lower temperature is usually caused by the lowered sensitivity of glass electrodes. Nevertheless, the systematic error of the stability constants used here can be estimated, on the basis of the corresponding literature data, as 0.1–0.2 logarithmic units [51]. The obtained equilibrium concentrations [M], [L] and [L′] were then used to calculate the \(K_{{{\text{O}}_{2} }}\) constant.
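The mass-balance solution described above can be sketched numerically. The snippet below is a minimal illustration, not the authors' Mathcad 13 routine: it solves a deliberately simplified three-component balance (only ML, ML′ and an "active" ML2L′ species, with hypothetical β values and total concentrations) using SciPy's Levenberg–Marquardt solver, then applies the same chemical-meaning filter, namely that each equilibrium concentration must be positive and below its total.

```python
import numpy as np
from scipy.optimize import root

# Hypothetical overall formation constants (illustration only, not the
# paper's values): ML, ML' and the "active" ML2L' complex.
B_ML, B_MLp, B_ML2Lp = 1.0e2, 5.0e1, 1.0e6
# Total (analytical) concentrations in mol/L; C_O2 plays the role of the
# oxygen-adduct concentration fixed by the volumetric measurement.
C_M, C_L, C_Lp, C_O2 = 0.010, 0.030, 0.010, 0.001

def balance(x):
    """Mass-balance residuals f1, f2, f3 for [M], [L], [L']."""
    m, l, lp = x
    ml = B_ML * m * l
    mlp = B_MLp * m * lp
    ml2lp = B_ML2Lp * m * l**2 * lp            # "active" complex
    f1 = C_M - m - ml - mlp - ml2lp - 2.0 * C_O2      # metal balance
    f2 = C_L - l - ml - 2.0 * ml2lp - 4.0 * C_O2      # amino-acid balance
    f3 = C_Lp - lp - mlp - ml2lp - 2.0 * C_O2         # imidazole balance
    return [f1, f2, f3]

# Levenberg-Marquardt iteration from a mid-range initial estimate.
sol = root(balance, x0=[C_M / 2, C_L / 2, C_Lp / 2], method="lm")
m, l, lp = sol.x
# Reject chemically meaningless roots: 0 < [X] < C_X for each component.
chem_ok = sol.success and all(
    0.0 < v < c for v, c in zip((m, l, lp), (C_M, C_L, C_Lp))
)
residual = max(abs(f) for f in balance(sol.x))
```

As in the paper's procedure, a root that converges but violates the positivity bounds would simply be discarded and the solver restarted from a different initial estimate.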
In the present reaction scheme, the first step corresponds to formation of the "active" complexes:

$$\mathrm{Co(imid)_2} + 2\,\mathrm{Hamac} + \mathrm{H_2O} \rightarrow \mathrm{Co(amac)_2(Himid)(H_2O)} + \mathrm{Himid}$$

Subsequently, the "active" complex takes up dioxygen, forming the dimeric oxygen adduct:

$$2\,\mathrm{Co(amac)_2(Himid)(H_2O)} + \mathrm{O_2} \rightarrow [\mathrm{Co(amac)_2(Himid)}]_2\mathrm{O_2^{2-}} + 2\,\mathrm{H_2O}$$

Treating the O2 uptake as a reversible reaction, the equilibrium constant may be calculated from the formula:

$$K_{\mathrm{O}_2} = \frac{[\mathrm{O_2}\ \text{adduct}]}{[\text{"active" complex}]^{2}\,[\mathrm{O_2}]}$$

where [O2 adduct] is the equilibrium concentration of the part of the oxygen adduct in which dioxygen is bound reversibly. This value was found by using the percentage of reversibility of O2 uptake, i.e. by rejecting the part of the O2 adduct in which the metal had undergone irreversible oxidation to Co(III) during the experiment. The equilibrium [O2] concentration was calculated from tabulated data on dioxygen solubility in water [52]. According to Henry's law, if the experiment proceeds at the same temperature but at decreased pressure, the volume of gas dissolved in water (or in a dilute solution) is proportionally lower. Under the experimental conditions:

$$V_{\mathrm{O}_2} = V_{\mathrm{g}} \cdot f = V_{\mathrm{g}} \cdot p_{\mathrm{O}_2}/760$$

where V_g = 0.04758 L is the tabulated solubility of dioxygen in 1 L of water at 1 °C under normal pressure (1.013 × 10^5 Pa), and \(p_{\mathrm{O}_2}\) is the partial pressure of dioxygen (in mmHg) in the gas burette.
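The Henry's-law correction and the final constant can be sketched numerically as follows. V_g is the tabulated solubility quoted above; the burette pressure and the two equilibrium concentrations are hypothetical values inserted only to show the arithmetic, and the conversion from gas volume to mol/L via the 22.414 L/mol molar volume is our assumption about how the tabulated volume is expressed.

```python
import math

# Henry's-law scaling of the tabulated O2 solubility to the partial
# pressure actually present in the gas burette (in mmHg).
V_g = 0.04758              # L of O2 per L of water at 1 degC and 760 mmHg (table value)
p_O2 = 720.0               # hypothetical O2 partial pressure in the burette, mmHg
V_O2 = V_g * p_O2 / 760.0  # L of O2 dissolved per L of solution

# Convert the dissolved gas volume to a molar concentration,
# assuming the tabulated volume refers to STP (molar volume 22.414 L/mol).
conc_O2 = V_O2 / 22.414    # mol/L

# K_O2 = [O2 adduct] / ([active complex]^2 [O2]); concentrations hypothetical.
adduct = 4.0e-3            # reversibly bound oxygen adduct, mol/L
active = 1.0e-3            # "active" complex, mol/L
K_O2 = adduct / (active**2 * conc_O2)
log_K_O2 = math.log10(K_O2)
```

In the paper's workflow, `adduct` would first be scaled by the measured reversibility fraction, i.e. the irreversibly oxidized part of the adduct is excluded before this step.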
The \(V_{{{\text{O}}_{ 2} }}\) value gives the [O2] concentration after conversion to the number of mmol of O2 dissolved in 1 L of the solution.

Abbreviations

imid: imidazolate (deprotonated imidazole)
Himid: imidazole
amac: amino acid anion (α-amine group non-protonated, carboxyl group deprotonated)
Hamac: amino acid (α-amine group protonated, carboxyl group deprotonated)

References

1. Tiné MR (2012) Cobalt complexes in aqueous solutions as dioxygen carriers. Coord Chem Rev 256:316–327
2. Zhang X, Yue F, Li H, Huang Y, Zhang Y, Wen H, Wang J (2016) Reversible oxygenation of α-amino acid–cobalt(II) complexes. Bioinorg Chem Appl 2016:3585781
3. Cheng X, Huang Y, Li H, Yue F, Wen H, Wang J (2016) Reversible oxygenation of 2,4-diaminobutanoic acid–Co(II) complexes. Bioinorg Chem Appl 2016:8296365
4. Yue F, Song N, Huang Y, Wang J, Xie Z, Lei H, Zhang X, Fu P, Tao R, Chen X, Shi M (2013) Reversible oxygenation of bis[β-(2-pyridyl)-α-alaninato]Co(II) complex in aqueous solution at room temperature. Inorg Chim Acta 398:141–146
5. Loew G (2006) Electronic structure of heme sites. In: Solomon EI, Lever ABP (eds) Inorganic electronic structure and spectroscopy, 2nd edn. Wiley Interscience, New York, pp 451–532
6. Wirstam M, Lippard SJ, Friesner RA (2003) Reversible dioxygen binding to hemerythrin. J Am Chem Soc 125:3980–3987
7. Momenteau M, Reed CA (1994) Synthetic heme dioxygen complexes. Chem Rev 94:659–698
8. Stenkamp RE (1994) Dioxygen and hemerythrin. Chem Rev 94:715–726
9. Magnus KA, Ton-That H, Carpenter JE (1994) Recent structural work on the oxygen transport protein hemocyanin. Chem Rev 94:727–735
10. Niederhoffer EC, Timmons JH, Martell AE (1984) Thermodynamics of oxygen binding in natural and synthetic dioxygen complexes. Chem Rev 84:137–203
11. Jones RJ, Summerville DA, Basolo F (1979) Synthetic oxygen carriers related to biological systems. Chem Rev 79:139–179
12. Klotz IM, Kurtz DM Jr (1984) Binuclear oxygen carriers: hemerythrin. Acc Chem Res 17:16–22
13. Erskine RW, Field BO (1976) Reversible oxygenation. Struct Bonding 28:1–50
14. Henrici-Olivé G, Olivé S (1974) Die Aktivierung von molekularem Sauerstoff. Angew Chem 86:1–56
15. Collman JP, Fu L (1999) Synthetic models for hemoglobin and myoglobin. Acc Chem Res 32:455–463
16. Khatoon Z, Kabir-ud-Din (1989) Potentiometric studies on mixed-ligand complexes of cobalt(II) and nickel(II) with amino acids as primary ligands and imidazole as secondary ligand. Trans Met Chem 14:34–38
17. Brodsky NR, Nguyen NM, Rowan NS, Storm CB, Butcher RJ, Sinn E (1984) pKa and isomer determinations of cobalt(III) imidazole and histidine complexes by NMR and X-ray crystallography. Inorg Chem 23:891–897
18. Mishra RK, Thakur BG (2016) Studies on some novel mixed ligand complexes of cobalt(II)–imidazole–amino acids. AIJRFRANS 16-146:39–41
19. Jeżowska-Trzebiatowska B, Vogt A, Kozłowski H, Jezierski A (1972) New Co(II) complexes, reversibly binding oxygen in aqueous solution. Bull Acad Pol Sci 3:187–192
20. Jeżowska-Trzebiatowska B (1974) Complex compounds as models of biologically active systems. Pure Appl Chem 367–390
21. Vogt A, Kufelnicki A, Jeżowska-Trzebiatowska B (1990) Studies on the cobalt(II)–dipeptide–imidazole system; a new dioxygen carrier. Polyhedron 9:2567–2574
22. Vogt A, Kufelnicki A, Leśniewska B (1994) The distinctive properties of dioxygen complexes formed in the cobalt(II)–asparagine–OH− systems (in relation to other amino acids and mixed complexes with N-base). Polyhedron 13:1027–1033
23. Kufelnicki A, Świątek M, Vogt A (1995) Uptake of molecular oxygen by Co(II) chelates with peptides in an aqueous solution. Part IX. Ternary cobalt(II)–histidine-containing dipeptide–imidazole systems: effective dioxygen carriers. Polish J Chem 69:206–212
24. Kufelnicki A, Świątek M (1999) Uptake of molecular oxygen by Co(II) chelates with peptides in an aqueous solution. Part XI. Stereoselective properties of oxygenated diastereoisomeric dipeptide systems. Polish J Chem 73:579–592
25. Kufelnicki A, Pająk M (2003) Dioxygen uptake by ternary complexes cobalt(II)–amino acid–imidazole. Ann Pol Chem Soc 2:467–471
26. Kufelnicki A, Pająk M (2007) Mass balance in the equilibrium system Co(II)–imidazole–O2. Ann Pol Chem Soc 410–413
27. Świątek-Tran B, Kołodziej HA, Tran VH, Baenitz M, Vogt A (2003) Magnetism of Co(C3H4N2)2(CO3)(H2O)2. Phys Stat Sol (a) 196:232–235
28. Baraniak E, Freeman HC, James JM, Nockolds CE (1970) Structural and spectroscopic study of carbonatodiaquobis(imidazole)cobalt(II), a bidentate carbonato-complex of cobalt(II). J Chem Soc A 2558–2566
29. Das D, Lee YM, Ohkubo K, Nam W, Karlin KD, Fukuzumi S (2013) Temperature-independent catalytic two-electron reduction of dioxygen by ferrocenes with a copper(II) tris[2-(2-pyridyl)ethyl]amine catalyst in the presence of perchloric acid. J Am Chem Soc 135:2825–2834
30. Fukuzumi S, Tahsini L, Lee YM, Ohkubo K, Nam W, Karlin KD (2012) Factors that control catalytic two- vs four-electron reduction of dioxygen by copper complexes. J Am Chem Soc 134:7025–7035
31. Goifman A, Gun J, Gelman F, Ekeltchik I, Lev O, Donner J, Bornick H, Worch E (2006) Catalytic oxidation of hydrogen sulfide by dioxygen on CoN4 type catalyst. Appl Catal B Environ 63:296–304
32. Hassanein M, Gerges S, Abdo M, El-Khalafy S (2005) Catalytic activity and stability of anionic and cationic water soluble cobalt(II) tetraarylporphyrin complexes in the oxidation of 2-mercaptoethanol by molecular oxygen. J Mol Catal A Chem 240:22–26
33. Khandar AA, Nejati K, Rezvani Z (2005) Syntheses, characterization and study of the use of cobalt(II) Schiff-base complexes as catalysts for the oxidation of styrene by molecular oxygen. Molecules 10:302–311
34. Simándi LI, Simándi TM, May Z, Besenyei G (2003) Catalytic activation of dioxygen by oximatocobalt(II) and oximatoiron(II) complexes for catecholase-mimetic oxidations of o-substituted phenols. Coord Chem Rev 245:85–93
35. Mäcke HR, Williams AF (1988) Dioxygen–transition metal complexes. In: Fox MA, Chanon M (eds) Photoinduced electron transfer. Part D. Photoinduced electron transfer reactions: inorganic substrates and applications. Elsevier Science Publishers BV, New York
36. Ochiai EI (1973) Oxygenation of cobalt(II) complexes. J Inorg Nucl Chem 35:3375–3389
37. Woźniczka M, Vogt A, Kufelnicki A (2016) Equilibria in cobalt(II)–amino acid–imidazole system under oxygen-free conditions: effect of side groups on mixed-ligand systems with selected l-α-amino acids. Chem Cent J 10:14
38. Vogt A (1980) PhD Thesis. Institute of Chemistry, University of Wrocław
39. Stadtherr LG, Martin RB (1973) Stereoselectivity in dipeptide complexes of cobalt(III). Inorg Chem 12:1810–1814
40. McLendon G, Martell AE (1976) Inorganic oxygen carriers as models for biological systems. Coord Chem Rev 19:1–39
41. Yatsimirskii KB, Nemoshkalenko VV, Aleshin VG, Bratushko YuI, Moiseenko EP (1977) X-ray photoelectron spectra of mixed oxygenated cobalt(II)–amino acid–imidazole complexes. Chem Phys Lett 52:481–484
42. Bagger S, Gibson K (1972) Reaction of molecular oxygen with mixed cobalt(II) complexes containing (S)-alanine and heterocyclic nitrogen bases. Acta Chem Scand 26:2972–2974
43. Kiss T (1990) Complexes of amino acids. In: Burger K (ed) Biocoordination chemistry: coordination equilibria in biologically active systems. Ellis Horwood Ltd, Chichester, pp 56–134
44. Lever ABP (1984) Inorganic electronic spectroscopy, 2nd edn. Elsevier, Amsterdam, pp 285–296
45. Woźniczka M, Pająk M, Vogt A, Kufelnicki A (2006) Equilibria in cobalt(II)–amino acid–imidazole system under oxygen-free conditions. Part I. Studies on mixed ligand systems with l-α-alanine. Polish J Chem 80:1959–1966
46. Mathcad 13 (2005) User's guide. Mathsoft Engineering & Education Inc, Cambridge
47. Meloun M, Havel J, Högfeldt E (1988) Computation of solution equilibria. Ellis Horwood Ltd, Chichester
48. Leggett DJ (1985) The determination of formation constants: an overview of computational methods for data processing. In: Leggett DJ (ed) Computational methods for the determination of formation constants. Plenum Press, New York
49. Fallab S (1967) Reactions with molecular oxygen. Angew Chem 79:500–511
50. Baes CF Jr, Mesmer RE (1976) The hydrolysis of cations. Wiley, New York
51. Kiss T (1997) The temperature dependence of stability constants. In: Pettit L, Powell K (eds) Stability constants database SC-Database for Windows, Appendix 11. IUPAC, Academic Software
52. Küster FW, Thiel A, Ruland A (1985) Rechentafeln für die chemische Analytik. Walter de Gruyter, Berlin

Authors' contributions

All authors contributed equally to the development of the manuscript. MP carried out the potentiometric and volumetric measurements as well as the calculations, and participated in the Results and Discussion. MW participated in the UV/Vis studies and in the Results and Discussion. AV provided the polymeric complex and participated in the Discussion. AK suggested the research idea, participated in the Results and Discussion, and coordinated the final formulation. All authors read and approved the final manuscript.

Acknowledgements

Financial support of this work by the Medical University of Łódź (Statute Fund No. 503/3-014-02/503-31-001, A. Kufelnicki) is gratefully acknowledged.

Author information

Department of Physical and Biocoordination Chemistry, Faculty of Pharmacy, Medical University of Łódź, Muszyńskiego 1, 90-151 Lodz, Poland: Marek Pająk, Magdalena Woźniczka, Aleksander Kufelnicki
Faculty of Chemistry, University of Wrocław, F. Joliot-Curie 14, 50-383 Wrocław, Poland: Andrzej Vogt
Correspondence to Aleksander Kufelnicki.

Additional files

Additional file 1: Figure S1. Determination of the stoichiometry of O2 uptake by the molar ratio method in the Co(II)–amac–Himid–O2 systems. l-α-amino acid (amac) = (a) alanine, (b) asparagine, (c) histidine. C_L: total concentration of amac; C_Co: total concentration of Co(II); mmol O2: number of mmol of dioxygen taken up. All samples contained 0.3 mmol of Co(imid)2 in 30 mL of solution.

Additional file 2: Figure S2. Titration curve of the water–acetone filtrate obtained when the dioxygen adduct formed in aqueous solution precipitated in acetone. Co(II):amac:Himid at a molar ratio of 0.3:0.9:0.3 (mmol); l-α-amino acid (amac) = (a) alanine, (b) asparagine, (c) histidine.

Additional file 3: Figure S3. Laboratory set for pH-metric and volumetric measurements.

Additional file 4: Figure S4. Laboratory set for determination of the imidazole released from the coordination sphere of Co(II): (a) initial preparation, (b) collection of the filtrate.

Citation: Pająk, M., Woźniczka, M., Vogt, A. et al. Reversible uptake of molecular oxygen by heteroligand Co(II)–l-α-amino acid–imidazole systems: equilibrium models at full mass balance. Chemistry Central Journal 11, 90 (2017). https://doi.org/10.1186/s13065-017-0319-8

Keywords: Co(II); l-α-amino acid; Dioxygen; Oxygen complex; \(K_{{{\text{O}}_{2} }}\) equilibrium constant; Mass balance
Graduate Student Seminar
UC Riverside Department of Mathematics
Fridays 1–2pm in Surge 284
Joe Moeller
Dylan Noack
Mike Pierce

Scheduled Talks, Spring 2018

1 June 2018
Constructing Arithmetic Hyperbolic Surfaces
Jonathan Alcaraz

In first-year Topology, we construct the so-called "Flat Torus" as the quotient of $2$-dimensional Euclidean space by integer linear combinations of the standard basis. This is used as an example for other topics in topology. In this talk, we will look at the abstract properties of this construction and apply them to hyperbolic space.

25 May 2018
Kähler-Einstein metrics on compact cohomogeneity one Fano manifolds via effective approximations
Pilar Orellana

Kähler-Einstein metrics emerge when a complex, topological manifold, under additional conditions, admits a metric that is both Einstein and Kähler. They are beautiful objects which arise naturally in many facets of mathematics, and moreover are of great importance in the study of string theory. We want to determine under what conditions a compact Fano manifold of Type I cohomogeneity one admits Kähler-Einstein metrics; this is done by verifying that the manifolds in question are Fano and checking their stability. However, using the standard methods currently available to us, this proves to be quite a cumbersome task which yields very limited results. In order to overcome this obstacle, we have developed new specialized methods which are effective at retrieving large-scale information about classes of these compact Fano manifolds and their corresponding Kähler-Einstein properties.

18 May 2018
The Eckmann-Hilton Argument and Some Applications
Alex Pokorny

There is a standard Munkres exercise assigned in 205A which asks to show that the fundamental group of a topological group is abelian. If you venture deeper into algebraic topology, you will stumble across a seemingly unrelated statement: that the higher homotopy groups of a topological space are all abelian.
In this talk, I will prove the above statements and explore this idea of proving that a given operation is abelian using the Eckmann-Hilton argument. This argument is simple to prove, yet yields deep results. If time permits, I will generalize the definition of the center of a group and talk about $2$-categories.

11 May 2018
Schur-Weyl duality and twisted commutative algebras
Derek Lowenberg

Schur-Weyl duality describes the link between the representation theories of the symmetric groups and the general linear groups. In this talk, I'll tell you what it is and how it gives useful equivalences of certain symmetric monoidal categories. Following Sam and Snowden, one can define algebras (and their modules) as objects in such categories, which they call twisted commutative algebras. These in turn are used to study behaviors of families of symmetric and general linear group representations, and so the game continues.

4 May 2018
Categorical Computation — Form and Content
Christian Williams

There is a duality of syntax and semantics: the form of a theory and the content of a model. This is a fundamental idea in category theory, which was introduced by William Lawvere in his 1963 PhD thesis. The notion of Lawvere theory provides an understanding of algebraic structures independent of presentation, improving upon set-theoretic universal algebra. Soon after, these theories were proven equivalent to monads, the categorical manifestation of duality, through which the algebras of the monad correspond to models of the theory. Theories and monads provide complementary perspectives on algebraic structures, and both are becoming important to theoretical and practical computer science. We discuss the application to distributed computation, where enriched Lawvere theories can be used to create languages, programs, and data structures which have their operational semantics (the ways they can operate in context) integrated into their definition, effecting sound design of software.
27 April 2018
Fractals and Finite Approximations with Respect to Noncommutative Metrics
Therese-Marie Landry

How can fractals be understood from the perspective of noncommutative geometry? Noncommutative geometry analyzes a space by studying the algebra of functions on that space. One of the fundamental tools of noncommutative geometry is Connes' spectral triple. Via the efforts of Lapidus and his collaborators, there exist spectral triples for the Sierpinski gasket that recover the geodesic metric and encode some of its fractal qualities. Building on the work of Rieffel, Latrémolière introduced a generalization of the Gromov-Hausdorff distance to noncommutative, or quantum, compact metric spaces. Together with Aguilar, Latrémolière applied this new technique in noncommutative geometry, the Gromov-Hausdorff propinquity, to the space of continuous complex-valued functions on the Cantor set. I am currently working on using the Gromov-Hausdorff propinquity to write the function space for the Sierpinski gasket as a limit of finite-dimensional $C^*$-algebras. In the process, I hope to understand which other fractals can be finitely approximated by noncommutative means.

20 April 2018
What is condensed matter and why does it matter?
Amir M-Aghaei

The physics of a strongly interacting system (condensed matter) is usually drastically different from that of its building blocks; this is known as emergence. In this talk, I introduce the physics of condensed matter, starting with a brief survey of how methods of statistical physics can explain some familiar but complicated phenomena around us. In particular, I will describe the physics of the liquid-gas transition and discuss different aspects of an old question: why do some materials conduct? Finally, I will mention the recent efforts of manipulating emergent physics to build quantum computers.
13 April 2018
Open Petri Nets and the Reachability Problem
Jade Master

In computer science, Petri nets are diagrams which are used to represent the transfer of resources in complex interacting systems of agents. These systems don't usually exist in isolation and instead have inputs and outputs corresponding to external or environmental factors. To model this interconnectedness we define open Petri nets: Petri nets which can be glued together along specified inputs and outputs. We form a category of open Petri nets, with open Petri nets as morphisms between their sets of inputs and outputs. Computer scientists are often interested in which states of a Petri net are reachable from a given initial state. We will put the category of open Petri nets to use by constructing reachability as a pseudofunctor from the category of open Petri nets to the category of relations.

6 April 2018
Model Theory and the Ax-Grothendieck Theorem

Model theory, from the perspective that I'll be talking about today, is the study of algebraic structures using ideas of pure logic. Or, as logician Wilfrid Hodges said, model theory is algebraic geometry minus the fields. In this talk I'll start with a brief introduction to model theory, talk about completeness and compactness, and develop some facts about the theory of algebraically closed fields. Then, if all goes well, this will culminate in a fantastic proof of the Ax-Grothendieck theorem: that every injective polynomial function $\boldsymbol{C}^n \to \boldsymbol{C}^n$ is surjective.

16 March 2018
On the structure of complete open Kähler manifolds of positive curvature
James Ogaja

A central problem in complex geometry is to generalize the classical uniformization theorems on Riemann surfaces to higher dimension. In Kähler geometry, attention has been centered on how curvature affects the holomorphic structure of a Kähler manifold. In this talk I'll discuss results related to Yau's uniformization conjecture.
9 March 2018
Some Combinatorial Representation Theory

Combinatorics is an interesting topic on its own, but it is also a very useful tool throughout mathematics. Due to the nature of the subject, combinatorics is extremely prevalent in representation theory, whether it's classifying all finite-dimensional irreducible representations or decomposing representations into irreducible pieces. I will discuss a combinatorial rule, originally called the "Littlewood-Richardson Rule," for decomposing tensor products of two irreducible representations of the Lie algebra $\mathfrak{sl}_{n+1}$. This uses some interesting combinatorics of partitions of natural numbers. Lastly, I will discuss how I am using this rule to find the decomposition into irreducible representations of certain representations of $\mathfrak{sl}_{n+1}$ coming from a family of prime representations of quantum affine $\mathfrak{sl}_{n+1}$ recently defined by Brito and Chari.

2 March 2018
Toric geometry
Ethan Kowalenko

A torus in normal everyday life is a product of circles, but in algebraic geometry a torus is a variety isomorphic to a product of $\mathbb{C}^\ast$'s. A toric variety $V$ is a variety with a dense open subset isomorphic to a torus, such that multiplication in the torus extends to a group action on $V$. Recently, I've been looking at toric varieties with singularities, and blowing these singular points up (unrelated to Dylan's talk) to get an overall smooth variety. The theory of toric varieties is actually very nice, with the ability to get almost any information you want about them via lattices and cones. In this talk, I'll compute some examples of toric varieties, show how to glue affine pieces together, and maybe also compute how to resolve a singular point.
12:30 – 1:30pm, Surge 268
Conference Travel Grants and You: Getting the Money You Need
Jose Manuel Madrano, Conference Grant Coordinator

We all want to travel and make connections, both to further our studies and to land that job after graduating. Being a grad student certainly does not make that easy, but the GSA has money to help. In this talk you can ask the GSA officer in charge of these funds any questions you might have about how to qualify for this money and how to apply!

16 February 2018
Towards quantifying fractality
Xander Henderson

While the term fractal is not well defined in mathematics, we generally understand it to refer to a set that possesses "roughness" or "complexity" at all scales. This complexity can be detected and quantified by studying zeta functions associated to the set. In this talk, we will introduce the distance zeta function associated to a bounded subset of a metric space, then discuss several examples of fractal and non-fractal sets.

9 February 2018
Integrability, the singular manifold method and Darboux transformations: an algorithmic procedure to determine solutions
Paz Albares

The Painlevé property has been proved to be a powerful test for identifying integrability, as well as a good basis for determining many properties of a given (nonlinear) PDE. The singular manifold method, based on Painlevé analysis, provides the Lax pair and the Bäcklund transformation for the PDE. Furthermore, by employing the Darboux transformation approach, an iterative algorithmic method to obtain recursive solutions from a basic seed solution can be constructed. This will be illustrated by means of some examples, related to Nonlinear Schrödinger equations, in which solutions such as solitons, lumps and rogue waves will be thoroughly discussed.

2 February 2018
Gauge Invariance and Charge Conservation
Michael McNulty

There is no doubt among us mathematicians that mathematical abstraction is indispensable in our field of study.
Yet when we consider mathematical applications to understanding the physical world, to what extent is it useful to depart from the seemingly concrete? In this talk, we will explore the concepts of gauge invariance and the conservation of electric charge through an abstracted lens; the former is a concept whose rich structure is familiar to those undergraduate students of physics who dug deeper than the typical classroom, while the latter is an assertion familiar to most high school students. We will see how, given a simple mathematical framework, the phenomena of electromagnetism emerge nearly out of thin air and are completely self-contained within the initial framework. Our level of generality will lend itself toward viewing mathematical abstraction in applications to physics not just as useful but as of utmost importance, a crucial tool for the serious practitioner.

26 January 2018
Blowing Things Up with Pinchuk and Frankel

In the complex plane there are a grand total of two simply connected domains: the plane itself and the ball (up to biholomorphism). This amazing result, known as the Riemann Mapping Theorem, has unfortunately proven not to hold in higher dimensions. Thus began the century-long journey to classify simply connected domains in higher-dimensional complex space. A plethora of techniques have been developed in that time, and one such technique is the method of rescaling. There are two classic methods, Frankel rescaling and Pinchuk rescaling, each with its own strengths and weaknesses.

19 January 2018
The Philosophical Science of Logic

This week, I will try to explain the wild idea which led me to pursue mathematics. The introductory talk will serve to foster discussion, and hopefully some real interest. The topic cannot be summarized in an hour, let alone a paragraph. I will not defend a theory, but rather encourage a different way of thinking.
Even in the best conditions the subject is extremely subtle and difficult, so I ask that you please come with an open mind. I will challenge basic assumptions, make provocative claims, and speak about something that is frankly still out of my cognitive league - so, a foundation of mutual respect is essential. If the hour can be free of pretense, prejudice, and preconception, we will be, as far as I know, the only people on earth thinking about this fascinating idea. This is a dream to which I am devoting my whole life, and I am excited to share it with you. Thank you for reading, and I hope to see you there.

Scheduled Talks, Fall 2017

8 December 2017
The Decay Lemma and Applications
Matthew Overduin

In the paper titled Decay Properties of Axially Symmetric D-Solutions to the Steady Navier-Stokes Equations, it is claimed that if \begin{equation} \int\limits_{\boldsymbol{R}^3} r^{e_1} \left|f(r,z)\right|^2 \,\mathrm{d}x \leq C \quad\quad \int\limits_{\boldsymbol{R}^3} r^{e_2} \left|\nabla f(r,z)\right|^2 \,\mathrm{d}x \leq C \quad\quad \int\limits_{\boldsymbol{R}^3} r^{e_3} \left|\nabla \partial_z f(r,z)\right|^2 \,\mathrm{d}x \leq C \end{equation} with nonnegative constants $e_1$, $e_2$, $e_3$, then for any $r$ greater than zero we have \begin{equation} \int\limits_{-\infty}^{\infty} \left|f(r,z)\right|^2 \,\mathrm{d}z \leq Cr^{-\frac{1}{2}(e_1+e_2)-1} \quad \int\limits_{-\infty}^{\infty} \left|\partial_z f(r,z)\right|^2 \,\mathrm{d}z \leq Cr^{-\frac{1}{2}(e_2+e_3)-1} \quad \left|f(r,z)\right|^2 \leq Cr^{-\frac{1}{4}(e_1+2e_2+e_3)-1} \,. \end{equation} While the paper outlines a proof of this lemma, the purpose of this talk is to fill in the gaps of that proof and to show how these estimates are obtained. We will also discuss how this lemma is relevant to solving the axially symmetric Navier-Stokes equations as a whole, as well as other types of equations.
1 December 2017
Deformations and nonnegative curvature
Lawrence Mouillé

In Riemannian geometry, a natural question to ask is "what manifolds admit nonnegative or positive curvature?" This question has led to interest in deformations of Riemannian metrics and in collapse (convergence to a lower-dimensional space) of Riemannian manifolds. I will discuss some general results in this area and describe a particular deformation due to Jeff Cheeger in the context of manifolds with isometric group actions.

17 November 2017
Analysis on Manifolds via Li-Yau Gradient Estimates
Xavier Ramos Olivé

We are always told that the motivation for defining a smooth structure on a manifold is to be able to do calculus and analysis on manifolds. But how exactly is this done, and why? Will analysis give us information about our manifold? In this talk we will see how to define some natural differential equations on Riemannian manifolds, and how, by studying their solutions, we can get topological information about the underlying manifold. We will do this via an example: by studying the so-called Li-Yau gradient estimates of the heat kernel, with a particular focus on their relationship to the Ricci curvature. These estimates can be used to derive some bounds on the Betti numbers of the manifold. If time permits, we will explore some different strategies to derive the gradient estimate under different curvature assumptions, although to protect our sanity, we will skip all the messy computations. No previous knowledge about the concept of curvature will be required for the talk.

3 November 2017
Covariance Computations for the Active Subspace Method Applied to a Wind Model
Jolene Britton

The method of active subspaces is an effective means of reducing the dimension of a multivariate function $f$. This method enables experiments and simulations that would otherwise be too computationally expensive due to the high dimensionality of $f$.
By using a covariance matrix composed of the gradients of $f$, one can find the directions in which $f$ varies most strongly, i.e. the active subspace. The current standard for estimating these covariance matrices is the Monte Carlo estimator. Due to the slow convergence of Monte Carlo methods, we propose alternative algorithmic approaches. The first utilizes a separated representation of $f$, while the second uses polynomial chaos expansions. Such representations have well-defined sampling strategies and allow for the analytic computation of entries of the covariance matrix. Experimental results demonstrate how the Monte Carlo methods compare to our proposed alternative approaches as applied to a function representing the power output of a wind turbine.

27 October 2017
Better Faster Strongly Jacobson Modules
Tim McEldowney

Inventing new math is hard. However, there is a nice workaround: take old math and add an adjective. In this talk, I will build up to my most recent result, which looks suspiciously like another theorem. I will start by talking about the base structures I study, called "$G$-domains," which are integral domains that are close to being their field of fractions. Next, I will define "$G$-ideals" and "Hilbert rings," which are made from these $G$-domains, with some clear examples of these structures from common rings. Afterwards, I will talk about the "strongly" adjective and what it does to these objects. Lastly, I will close with a game I like to call pin the adjective on your adviser's theorem.

20 October 2017
Network Models

A network is a complex of interacting systems which can often be represented as a graph equipped with extra structure. Networks can be combined in many ways, including by overlaying one on top of the other or sitting one next to another. We introduce network models — which are formally a simple kind of lax symmetric monoidal functor — to encode these ways of combining networks.
By applying a general construction to network models, we obtain operads for the design of complex networked systems.

13 October 2017
Can a nice variety of variety exist?

Algebraic geometry is notorious for being difficult, as it is broadly the study of the zero sets of polynomials over some assigned rings. I will attempt to describe a very computable class of such zero sets, called toric varieties, by showing literal computations. Like, explicitly.

6 October 2017
An Introduction to Hopf Algebras
Dane Lawhorne

What happens when you take the commutative diagrams that define an algebra and reverse all the arrows? The result is called a coalgebra, and with a few more axioms, you get a Hopf algebra. In this talk, we will examine the role of Hopf algebras in representation theory. In particular, we will see that the category of left modules over a Hopf algebra has both tensor products and dual modules.

UCR Math © 2018 GSS Organizers
Atomic cobalt as an efficient electrocatalyst in sulfur cathodes for superior room-temperature sodium-sulfur batteries

Bin-Wei Zhang1, Tian Sheng2, Yun-Dan Liu3, Yun-Xiao Wang1, Lei Zhang1, Wei-Hong Lai1, Li Wang1, Jianping Yang4, Qin-Fen Gu5, Shu-Lei Chou1, Hua-Kun Liu1 & Shi-Xue Dou1

The low-cost room-temperature sodium-sulfur battery system is arousing extensive interest owing to its promise for large-scale applications. Although significant efforts have been made, resolving the low sulfur reaction activity and severe polysulfide dissolution remains challenging. Here, a sulfur host composed of atomic cobalt-decorated hollow carbon nanospheres is synthesized to enhance sulfur reactivity and to electrocatalytically reduce polysulfide into the final product, sodium sulfide. The constructed sulfur cathode delivers an initial reversible capacity of 1081 mA h g−1 with a 64.7% sulfur utilization rate; significantly, the cell retains a high reversible capacity of 508 mA h g−1 at 100 mA g−1 after 600 cycles. Excellent rate capability is achieved, with an average capacity of 220.3 mA h g−1 at the high current density of 5 A g−1. Moreover, the electrocatalytic effects of atomic cobalt are clearly evidenced by operando Raman spectroscopy, synchrotron X-ray diffraction, and density functional theory.

Currently, lithium-ion batteries (LIBs) play a dominant role in battery technologies for portable electronics because of their high capacity, high energy density, and reliable efficiency1,2. On the other hand, new emerging applications, such as electric vehicles and large-scale grids, require battery technologies with low costs and long cycle life3,4,5,6.
Lithium-sulfur (Li/S) batteries have attracted intense attention due to their high theoretical specific energy, environmental benignity, and the low cost and abundance of sulfur7,8,9. Thanks to efforts over decades, exciting progress on Li-S batteries has been achieved in terms of high capacity, prolonged service life, and remarkable rate capability, which is rapidly bringing this system close to market. Meanwhile, it should be noted that battery systems based on Li-ion storage are not suitable for large-scale applications, due to the high cost and insufficiency of Li resources10,11. Therefore, increasing interest is currently shifting to batteries based on low-cost and abundant sodium12,13. Room-temperature sodium-sulfur (RT-Na/S) batteries are among the ideal candidates to meet the scale and cost requirements of the market due to their overwhelming advantages: the high theoretical capacity of S (1672 mA h g−1), low cost, nontoxicity, and resource abundance14,15. Nevertheless, RT-Na/S batteries, which share a similar reaction mechanism with Li/S batteries, face critical problems of low reversible capacity and fast capacity fade16,17. The poor conductivity of sulfur and its sluggish reactivity with sodium, which result in a low utilization rate of sulfur and incomplete reduction to Na2Sx (x ≥ 2) rather than complete reduction to Na2S, are the main reasons for the low accessible capacity. In addition, fast capacity fade during the charge−discharge process occurs due to the dissolution of long-chain polysulfides in the electrolyte, which also leads to the rapid loss of active materials. Hence, effective materials design is the primary factor expected to improve the conductivity and activity of sulfur and prevent the dissolution of polysulfides.
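The theoretical capacity of sulfur quoted above (1672 mA h g−1) follows from Faraday's law applied to the full two-electron reduction S + 2Na → Na2S. A minimal sketch of that arithmetic, using standard physical constants rather than any data from this work:

```python
# Theoretical gravimetric capacity of sulfur for the two-electron
# reduction S + 2Na -> Na2S, via Faraday's law: Q = n*F / (3.6 * M).
F = 96485.0   # Faraday constant, C/mol
M_S = 32.06   # molar mass of sulfur, g/mol
n = 2         # electrons transferred per S atom

# 1 mAh = 3.6 C, so dividing by 3.6*M gives Q in mAh per gram of sulfur.
q_theoretical = n * F / (3.6 * M_S)
print(f"theoretical capacity: {q_theoretical:.0f} mAh/g")  # -> 1672 mAh/g
```

The same constant reappears throughout the paper as the benchmark against which sulfur utilization is judged.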
So far, the reported sulfur hosts (for example, hollow carbon spheres15, a microporous carbon polyhedron-sulfur composite18, and conducting polymers19) exhibit decent enhancement, but a huge leap is needed to reach the standard of practical applications. To the best of our knowledge, the best rate capacity and longest cycling stability for RT-Na/S batteries are observed for sulfur@interconnected mesoporous carbon hollow nanospheres (S@iMCHS) (127 mA h g−1 at 5 A g−1)20 and C-S polyacrylonitrile (c-PANS) (150 mA h g−1 after 500 cycles at 220 mA g−1)21, respectively. It is obvious that sulfur cathodes based on traditional carbonaceous host materials are not capable of meeting the practical targets for large-scale RT-Na/S batteries. Recently, novel sulfur hosts with inherent polarization, such as metallic oxides22 and metal sulfides23, have been investigated in Li/S cells. Compared with bare carbon materials, these polarized host materials have strongly sulfiphilic intrinsic properties and are able to impede polysulfide dissolution through strong chemical interactions between the polar host materials and the polysulfides. A similar concept has been demonstrated in RT-Na/S batteries: Cu nanoparticles loaded in mesoporous carbon have been utilized to immobilize the sulfur and polysulfides24, and a novel Cu foam current collector is able to activate sulfur electroactivity as well25. Furthermore, atomic-scale metal materials, including single-atom metals and metal clusters, in general not only possess remarkable electronic and reactive properties, but can also reach maximum atomic utilization26,27,28,29,30,31. It is rational, but very challenging, to introduce such atomic metals into a sulfur host, which would be expected to maximize the multiple functions of a polarized sulfur host and achieve extraordinary performance for RT-Na/S batteries.
Here, we successfully synthesize a highly effective sulfur host with atomic Co (including single-atom (SA) Co and Co clusters) supported in the micropores of hollow carbon (HC) nanospheres. The HC nanospheres are employed as ideal frameworks, which allow initial anchoring of Co nanoparticles and subsequent S encapsulation. In each HC reactor, it is interesting that the diffusion of sulfur molecules can serve as traction for atomic Co (Con) migration into the carbon shells, forming a novel Con-HC host. A sulfur composite, sulfur encapsulated in a Con-HC host (S@Con-HC), is prepared by simply tuning the reaction temperature. When applied in RT-Na/S batteries, the S@Con-HC cathode exhibits outstanding electrochemical performance, which suggests that the maximized atomic utilization optimizes the multiple functions of Co metal towards enhancing sulfur conductivity, activating sulfur reactivity, and immobilizing sulfur and polysulfides. More specifically, the S@Con-HC achieves remarkable cycling stability (507 mA h g−1 after 600 cycles at 100 mA g−1) and rate performance (220.3 mA h g−1 at 5 A g−1). A deep insight into the mechanism has also been obtained by cyclic voltammetry (CV), operando Raman spectroscopy, synchrotron X-ray diffraction (XRD), and density functional theory (DFT), confirming that atomic Co alleviates the "shuttle effect" and effectively electrocatalyzes the reduction of Na2S4 into the final product Na2S.

Growth process for the sulfur-hosted atomic cobalt-decorated hollow carbon composite

The synthetic process of the S@Con-HC is illustrated in Fig. 1. The successful encapsulation of Co nanoparticles (NPs, ~3 nm) and S is attributed to the microporous and hollow structure of the carbon spheres. Initially, a CoCl2 solution was infiltrated into the HC spheres and was reduced to Co NPs that uniformly decorated the carbon shells (~5 nm) of the HC nanospheres (Co-HC) by a controlled thermal treatment method (Supplementary Figs. 1, 2).
The interactions between Co and S occur in two stages as the temperature increases. First, the melted S was loaded into the Co-HC by a capillarity effect via a facile melt-diffusion strategy at 155 °C for 12 h (with the product denoted as S/Co-HC). It is clear that some of the S agglomerates in the hollow space of the carbon spheres while the rest is dispersed in the carbon shells of the S/Co-HC, as shown in atomic-resolution high-angle annular dark field (HAADF) scanning transmission electron microscopy (STEM) images (Supplementary Fig. 3). Subsequently, the S/Co-HC was heat-treated at 300 °C in a sealed quartz ampoule, which interestingly leads to the disappearance of both the Co nanoparticles and the S agglomerates. During this process, S begins to sublime. The concentration gradient results in S diffusion from the inside of the nanospheres to the surface. With sufficient thermal energy for S evaporation, most of the S molecules diffuse into the C shells, which drives the Co nanoparticles to be re-dispersed into the carbon shells as well. Thus, atomic Co, including Co single atoms and clusters, migrates into the C shells of each HC nanosphere by taking advantage of the diffusion of the inner S molecules. Finally, a novel S nanocomposite with S embedded in atomic Co-decorated hollow carbon (S@Con-HC) is achieved.

Fig. 1: Schematic illustration of synthesis. Schematic illustration of the synthesis of the hollow carbon decorated with cobalt nanoparticles (Co-HC). After sulfur (S) impregnation, the S/Co-HC is heat treated to generate atomic Co-decorated hollow carbon as a sulfur host material (S@Con-HC).

As displayed in Fig. 2 and Supplementary Fig. 4, the scanning electron microscopy (SEM) and transmission electron microscopy (TEM) images of the S@Con-HC demonstrate a uniform dispersion of hollow carbons without any residual nanoparticles; meanwhile, atomic Co species (bright dots) are observed in the C shells.
The elemental mapping and line-profile analysis of S@Con-HC demonstrate that this atomic Co is well confined in the carbon shells; meanwhile, most of the S is embedded in the carbon shell along with the dispersed atomic Co, which implies the simultaneous formation of the atomic Co and S dispersions. This is attributed to Co atoms migrating into the HC shells with S sublimation via an atom-migration mechanism based on the strong interaction between Co and S. Hence, most of the S molecules diffuse into the C shells and are adsorbed by atomic Co. The average size of the atomic Co is calculated to be 0.4 ± 0.2 nm from 200 single atoms and clusters in Supplementary Fig. 4. For comparison, a sample with S loaded on plain HC (S@HC), in which the S is evenly dispersed among the carbon shells of the HC, was prepared at 300 °C (Supplementary Fig. 5). It should be pointed out that atomic metals are difficult to form in pure carbon materials because of their high energy and instability32. Surprisingly, atomic Co is successfully introduced into the S@Con-HC composite. Active S, in turn, plays a critical role in forming and stabilizing the atomic Co through strong chemical Co−S bonds. In sharp contrast, numerous cubic nanoparticles (~10 nm) can be observed in HC prepared at 400 °C (Supplementary Fig. 6). The HAADF-STEM image displays two lattice distances of 1.94 Å and 2.75 Å, which are indexed to the (220) and (200) planes of CoS2, respectively. Elemental mapping of S@CoS2-HC clearly shows the formation of CoS2. The line-profile analysis across the carbon shell in Supplementary Fig. 6c demonstrates that the signal of Co is negligible in the carbon shell. The elemental S mapping results demonstrate that the S is homogeneously dispersed in the CoS2-HC host. Inductively coupled plasma-optical emission spectroscopy (ICP-OES) results demonstrate that the Co contents are comparable, with weight ratios of 7.53, 7.06, and 6.85% in S/Co-HC, S@Con-HC, and S@CoS2-HC, respectively.
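The lattice distances measured by HAADF-STEM can be cross-checked against the cubic (pyrite-type) structure of CoS2, for which the interplanar spacing is d = a/√(h² + k² + l²). The lattice parameter a ≈ 5.53 Å used below is a literature value for pyrite-type CoS2, not a number reported in this work, so this is only a consistency sketch:

```python
import math

a = 5.53  # Å; assumed literature lattice parameter of cubic pyrite-type CoS2

def d_spacing(h, k, l):
    """Interplanar spacing of a cubic lattice for Miller indices (h, k, l)."""
    return a / math.sqrt(h**2 + k**2 + l**2)

# Planes observed by HAADF-STEM: (200) at 2.75 Å and (220) at 1.94 Å.
for hkl in [(2, 0, 0), (2, 2, 0)]:
    print(hkl, f"d = {d_spacing(*hkl):.2f} Å")
# Computed d(200) ≈ 2.76 Å and d(220) ≈ 1.96 Å, consistent with the
# measured 2.75 Å and 1.94 Å within the precision of the STEM images.
```

The small residual difference (~0.01–0.02 Å) is within the uncertainty of both the assumed lattice parameter and the image-based measurement.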
Meanwhile, the Co loading ratios (5 and 20% of CoCl2) have also been optimized for S@Co-HC, as shown in Supplementary Figs. 7, 8 (see Supplementary Note 1 for details).

Fig. 2: Representative electron microscopy images. a Transmission electron microscopy (TEM) image, b, c high-angle annular dark field (HAADF)-scanning transmission electron microscopy (STEM) images of the atomic cobalt-decorated hollow carbon sulfur host (S@Con-HC). Scale bar, 20 nm (a), 10 nm (b) and 2 nm (c). d Line-profile analysis from the area indicated on (b). e–h Elemental mapping of S@Con-HC. Scale bar, 10 nm.

The thermogravimetric analysis (TGA) results shown in Fig. 3a and Supplementary Figs. 9, 10 indicate that the S contents in S/Co-HC, S@Con-HC, and S@HC are ~48, 47, and 30 wt%, respectively. The low S loading ratio of 30 wt% indicates that the atomic Co in HC is favorable for capturing S and enhancing the S loading amount. There are three states of sulfur in S@Con-HC. The crystalline sulfur on the carbon layer sublimes at relatively low temperatures, below ~270 °C, and accounts for ~33 wt%. A small amount of amorphous sulfur, confined in the micropores15, evaporates at temperatures from 270 to 530 °C, with a sulfur loss of ~8 wt%; the sulfur encapsulated in the hollow space finally sublimes at temperatures above 530 °C, which corresponds to a sulfur portion of ~6 wt%. The S@HC sample shows a similar TGA curve, indicating that S is present in the same states as in S@Con-HC; the amorphous sulfur in S@HC amounts to ~7 wt%. Compared with the other Co-based materials, as shown in Supplementary Fig. 10, the S in S@Con-HC is the most difficult to vaporize. The starting temperature of weight loss is 173 °C for S@Con-HC, which is much higher than that of S/Co-HC (155 °C), indicating that the binding between S and Co in S@Con-HC is the strongest20. Interestingly, the S loss commences at 171 °C for S@HC, indicating that the S is firmly embedded in the HC after removal of the surface S via heat treatment at 300 °C20.
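As a bookkeeping check on the TGA interpretation, the three sulfur populations quoted for S@Con-HC should sum to the total sulfur loading. The fractions below are the approximate values stated in the text:

```python
# Approximate sulfur mass fractions in S@Con-HC from the TGA steps (wt%),
# as quoted in the text; the temperature windows label each loss step.
s_fractions = {
    "crystalline S on the carbon layer (lost below ~270 C)": 33.0,
    "amorphous S in the micropores (lost 270-530 C)": 8.0,
    "S sealed in the hollow interior (lost above ~530 C)": 6.0,
}
total_s = sum(s_fractions.values())
print(f"total S loading: ~{total_s:.0f} wt%")  # matches the ~47 wt% from TGA
```

The total of ~47 wt% agrees with the overall sulfur content of S@Con-HC determined independently from the full TGA curve.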
This result also indicates that the S in S@Con-HC is not only physically confined in the HC frameworks, but also chemisorbed by atomic Co. The S ratio of S@CoS2-HC (~31 wt%) is low because the formation of CoS2 consumes a certain amount of S. The XRD patterns of these samples are shown in Fig. 3b and Supplementary Fig. 11; the peaks of S@Con-HC and S@HC are indexed to crystalline sulfur. The low intensity and absence of certain peaks imply that sulfur is embedded in the Con-HC and HC hosts. S@CoS2-HC has four peaks at 32.5°, 36.36°, 46.54°, and 54.98°, corresponding respectively to the (200), (210), (220), and (311) planes of CoS2 (JCPDS no. 41-4171). Significantly, the XRD results for S/Co-HC and S@Con-HC indicate that S accounts for the dominant component; the lack of XRD peaks for Co or any CoSx is likely due to the ultrafine, even atomic, size of the Co, and additionally, the wrapping of the Co surface by S would weaken its signal as well.

Fig. 3: Thermogravimetric analysis, X-ray diffraction, and X-ray photoelectron spectra. a Thermogravimetry (TG) of hollow carbon hosting sulfur (S@HC) and the atomic cobalt-decorated hollow carbon sulfur host (S@Con-HC). b X-ray diffraction (XRD) patterns of sulfur (S) powder, S@HC and S@Con-HC. c S 2p region of X-ray photoelectron spectroscopy (XPS) spectra for S@HC (bottom) and S@Con-HC (top). d Co 2p region of the XPS spectrum for S@Con-HC.

To investigate the interaction between Co and S, X-ray photoelectron spectroscopy (XPS) was carried out. As shown in Fig. 3c and Supplementary Fig. 12, compared with pure S (S 2p3/2, 164.0 eV), the S 2p3/2 responses of S@HC and S@Con-HC are shifted to 163.60 and 163.45 eV, respectively. The shift is probably attributable to the adsorption of S by HC33. The lower S 2p3/2 binding energy of S@Con-HC could be due to the presence of atomic Co, which is decorated on the carbon shell and could aid the HC in immobilizing S by forming Co−S bonds.
Interestingly, the S 2p3/2 binding energy of S/Co-HC (165.1 eV) is close to that of S@CoS2-HC (164.90 eV), which indicates that the surface Co nanoparticles of S/Co-HC could polarize S towards S2−. To further investigate this hypothesis, we studied the states of Co. The XPS data for S@Con-HC in the Co 2p region in Fig. 3d indicate that the Co contributions can be deconvoluted into Co0 (778.70 eV) and Co2+ (781.60 eV). The Co2+ (781.60 eV) in S@Con-HC could be attributed to single Co atoms anchored on the S-dispersed hollow carbon34, probably through the formation of Co−S bonds. In addition to single Co atoms, however, Co clusters exist in S@Con-HC, as shown in Fig. 2 and Supplementary Fig. 4. Due to the existence of these Co clusters, the XPS data in the Co 2p region for S@Con-HC show evidence of the Co0 state. The binding energy of the Co0 2p3/2 in S@Con-HC is 778.70 eV, a shift of 0.5 eV compared with that of pure Co (778.20 eV); this positive shift in binding energy indicates the formation of Co−S bonds between the Co clusters and S in S@Con-HC. The Co 2p3/2 region of the XPS spectrum for S@CoS2-HC, with peaks at 781.10 and 785.80 eV attributed to Co2+ 2p3/2 and Co4+ 2p3/2 respectively, confirms the formation of the Co−S bonds of CoS235. Since XPS is a surface-sensitive technique, the trend in Co binding energy depends on the size of the Co@CoSx core-shell structure, which is why the Co oxidation states of S/Co-HC show the highest binding energy in Supplementary Fig. 12. Based on the TGA, XRD, and XPS results, we conclude that in S@Con-HC the S is not only physically adsorbed by the HC, but also chemisorbed by atomic Co, leading to the formation of Co−S bonds. Meanwhile, the S@Con-HC retains the metallic Co0 state, which could effectively improve the conductivity of the S cathode and enhance the performance of RT-Na/S batteries.
Performance evaluation of the room-temperature sodium-sulfur batteries

The discharge/charge profiles of the 1st, 2nd, 10th, 50th, 100th, 200th, 300th, 400th, 500th, and 600th cycles at 100 mA g−1 for the S@Con-HC and S@HC cathode materials are shown in Fig. 4a, b. The RT-Na/S@Con-HC cell shows two long plateaus, running from 1.68 to 1.04 V and from 1.04 to 0.8 V, during the initial discharge process: the high-voltage plateau corresponds to the solid−liquid transition from S to dissolved long-chain polysulfides, and the low-voltage plateau is attributed to the further sodiation of long-chain polysulfides to short-chain sulfides. By contrast, the two plateaus of S@HC are at 1.82 and 1.62 V during the initial discharge process. The lower potential plateaus of S@Con-HC in the initial cycle may be attributed to the complex bonds between Co and S (Co−S bonds): additional energy is needed to dissociate S from the Co−S bond, resulting in a more negative potential36,37. Consequently, in the following cycles, the discharge potential plateaus of S@Con-HC shift in the positive direction38,39. This phenomenon can also be found for S/Co-HC and S@CoS2-HC, as shown in Supplementary Fig. 13. To investigate the effects of slow charge−discharge processes, tests of the S@Con-HC cell at low current densities (20 and 50 mA g−1) were carried out, as shown in Supplementary Fig. 14. It can be clearly seen that the initial reversible capacity of S@Con-HC is 1613 mA h g−1 at 20 mA g−1, close to the theoretical capacity of S (1672 mA h g−1), with a reversible capacity of 945 mA h g−1 retained after 40 cycles. When tested at 50 mA g−1, the S@Con-HC delivers an initial reversible capacity of 1360 mA h g−1, maintaining 904 mA h g−1 after 40 cycles. During the slow charge−discharge process at a current density of 20 mA g−1, the produced long-chain polysulfides could be further fully sodiated to Na2S4.
Meanwhile, the atomic Co effectively alleviates the dissolution of Na2S4 and electrocatalytically reduces Na2S4 into the final product Na2S. However, the slow charge−discharge process aggravates the dissolution and shuttling of the long-chain polysulfides, leading to fast capacity decay and inferior capacity retention. This is in good agreement with the cycling performance, in which this cathode shows the lowest capacity retention (58.5%) at 20 mA g−1. The comparisons at different currents indicate that a slow charge−discharge process is favorable for realizing high reversible capacity but suffers severe capacity decay. It is rational to select a current density that is low enough to extract the capacity of all the S active material yet high enough to alleviate the shuttle effect. By this measure, the current density of 100 mA g−1 shows the most satisfactory performance. Meanwhile, the electrochemical performances of S@Co-HC with different Co loadings are shown in Supplementary Fig. 15 and Supplementary Note 2, which also demonstrate that the S@Con-HC possesses the best performance among these cathode materials.

Fig. 4: Room-temperature sodium-sulfur battery tests. a, b Discharge/charge curves of the atomic cobalt-decorated hollow carbon sulfur host (S@Con-HC) and hollow carbon hosting sulfur (S@HC) at 100 mA g−1. c, d Cycling performance and rate performance for S@Con-HC and S@HC. e Comparison of the rate and cycling (inset) capabilities of previously reported room-temperature sodium-sulfur (RT-Na/S) batteries with our work.

The long-term cycling stability of the S@HC and S@Con-HC cathodes is displayed in Fig. 4c at 100 mA g−1 over 600 cycles. Both S@HC and S@Con-HC display high cycling stability and capacity retention after the initial capacity decay, which indicates that the closed hollow carbon host can effectively manage the fatal polysulfide dissolution.
S@Con-HC delivers an initial reversible capacity of 1081 mA h g−1 with a Coulombic efficiency of 52.1%, retaining an excellent reversible capacity of 508 mA h g−1 after 600 cycles. The high initial discharge capacity of S@Con-HC (~2075 mA h g−1) is due to decomposition of the electrolyte, side reactions between the carbonate-based solvents and soluble polysulfides, and formation of the solid electrolyte interphase film25. In sharp contrast, the S@HC cathode delivers a first-cycle capacity of 580/1209 mA h g−1, which declines to 271 mA h g−1 after 600 cycles. During the first ten cycles, there is obvious capacity decay for both the S@Con-HC and S@HC cathodes, attributed to the loss of dissolved long-chain polysulfides. The cells then show relatively stable cycling with gradual capacity loss over the subsequent cycles, which mainly originates from the impedance increase in the cells due to the formation of Na2S. This is consistent with the synchrotron XRD results (Fig. 5), confirming that nonconductive Na2S accumulates in the cathode during the charge/discharge processes. Significantly, the high accessible capacity of S@Con-HC arises mostly from the atomic Co decoration, which further improves the conductivity and electroactivity of S. To highlight the role of atomic Co, the cycling stability of S/Co-HC and S@CoS2-HC is shown in Supplementary Fig. 16. It is noteworthy that S/Co-HC displays fast capacity degradation: its initial reversible capacity of 1018/617 mA h g−1 falls to only 64/62 mA h g−1 after 100 cycles. Additionally, the first-cycle capacity of S@CoS2-HC is 610/1415 mA h g−1; after 200 cycles, it is only 206 mA h g−1. These results demonstrate that atomic Co possesses stronger electrocatalytic capability than Co nanoparticles or CoS2 nanoparticles. The role of atomic Co in improving the S performance is discussed in the following section.
Characterization of mechanism. a Cyclic voltammograms, and b in situ Raman spectra. c In situ synchrotron X-ray diffraction (XRD) patterns of the room-temperature sodium-sulfur battery comprised of atomic cobalt-decorated hollow carbon sulfur host (RT-Na/S@Con-HC) cells (left) with the initial galvanostatic charge/discharge curves (middle) at 500 mA g−1, and contour plot of XRD patterns at selected ranges of degrees two theta (right) at 100 mA g−1

Rate capability was evaluated at various current densities from 0.1 to 5 A g−1 in the potential range of 0.8 to 2.8 V, as shown in Fig. 4d. S@Con-HC exhibits the highest reversible capacities of ~820, 498, 383, 313, 269, and 220 mA h g−1 at 0.1, 0.2, 0.5, 1, 2, and 5 A g−1, respectively, compared with S@HC and S@CoS2-HC (Supplementary Fig. 17). When the discharge/charge rate is brought back to the initial rate of 0.1 A g−1, RT-Na/S@Con-HC shows a remarkable reversible capacity of 625 mA h g−1 after 100 cycles (367 mA h g−1 for RT-Na/S@HC). A comparison of the rate capability versus current density of S@Con-HC with the state of the art in the literature is presented in Fig. 4e; to the best of our knowledge, such an exceedingly high rate capability for RT-Na/S batteries has not been reported previously39,40,41,42,43,44,45. The polarized Con-HC host is responsible for the superior Na-storage properties of S@Con-HC: it plays key roles in maximizing sulfur/polysulfide immobilization and activation via strongly electrocatalytic atomic Co, reaching performance that is among the best in the field of RT-Na/S batteries.

Mechanistic investigation on sodium storage of the sulfur cathode

To investigate the mechanism of S@Con-HC, CV, in situ Raman spectroscopy (at 500 mA g−1), and in situ synchrotron XRD (λ = 0.6883 Å) data, using the Powder Diffraction Beamline (Australian Synchrotron), were collected for the initial galvanostatic charge/discharge and the second discharge curve (at 100 mA g−1).
Figure 5a presents cyclic voltammograms of S@Con-HC, while voltammograms for S@Co-HC, S@CoS2-HC, and S@HC are shown in Supplementary Fig. 18 (see Supplementary Note 3 for details). The RT-Na/S@Con-HC cell shows two prominent peaks at around 1.68 and 1.04 V during the first cathodic scan. The peak at 1.68 V corresponds to the transition from solid S to dissolved liquid long-chain polysulfides (Na2Sx, 4 < x ≤ 8)46; in the following cathodic sweep from 1.68 to 1.04 V, the long-chain polysulfides are further sodiated to Na2S4 and then to short-chain polysulfides (Na2Sy, 1 < y ≤ 3)20. Significantly, the subsequent cathodic peaks move toward positive potential after the first CV cycle, in accordance with the discharge/charge curves, which also demonstrates the formation of Co−S bonds in S@Con-HC. Meanwhile, operando Raman spectra and synchrotron XRD patterns complementarily confirm the mechanism outlined above. As illustrated in Fig. 5b, when the cell is discharged to 1.60 V, the S stretching vibration band at 475 cm−1 disappears and another peak (451 cm−1) appears, which can be assigned to Na2S4 (ref. 47). Correspondingly, in situ synchrotron XRD (Fig. 5c) shows broadening of a peak at 23.01°, indexed to the (240) planes of S (JCPDF no. 71-0569), upon discharge to 1.8 V. A new peak (22.97°) evolves around the original peak (23.01°), which can be attributed to the formation of long-chain polysulfides (Na2Sx). When further discharged to 1.4 V, the Na2Sx peak gradually disappears and a new peak develops at 13.22°, attributable to the (213) planes of Na2S4 (JCPDF no. 71-0516). When discharged to 1.30 V, in addition to the main broad band at 451 cm−1, a new peak at 472 cm−1 appears in the Raman spectra; this new peak can be attributed to Na2S2 (ref. 47). Consistently, a new peak at 18.73° in the synchrotron XRD pattern for the sample discharged to 1.2 V can be attributed to the (104) peak of Na2S2 (JCPDF no. 81-1764)20.
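As a quick sanity check on the synchrotron XRD peak positions, Bragg's law converts the quoted 2θ values (λ = 0.6883 Å) into d-spacings. This is only the geometric conversion; the phase assignments themselves come from the JCPDF cards cited above:

```python
import math

def d_spacing(two_theta_deg, wavelength_A=0.6883):
    """Bragg's law with n = 1: lambda = 2 * d * sin(theta)."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength_A / (2.0 * math.sin(theta))

# 2-theta positions reported in the text
for label, tt in [("S (240)", 23.01), ("Na2S4 (213)", 13.22), ("Na2S2 (104)", 18.73)]:
    print(f"{label}: 2theta = {tt} deg -> d = {d_spacing(tt):.3f} A")
```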
Furthermore, the in situ Raman spectrum of S@Con-HC discharged to 1.0 V also exhibits a new peak at 475 cm−1. Given the similar Raman signatures of Na2S and S8 (refs. 47,48), this most likely indicates the formation of Na2S (ref. 47); when fully discharged to 0.8 V, the single remaining band at 475 cm−1 demonstrates that the final product is Na2S. Consistently, a new peak at 17.07° can be assigned to the (220) planes of Na2S as well, as shown in Fig. 5c (JCPDF no. 77-2149)20. Therefore, the first discharge mechanism is proposed to be as follows: $${\mathrm{S}} \to {\mathrm{Na}}_{\mathrm{2}}{\mathrm{S}}_{\mathrm{x}} \to {\mathrm{Na}}_{\mathrm{2}}{\mathrm{S}}_{\mathrm{4}} \to {\mathrm{Na}}_{\mathrm{2}}{\mathrm{S}}_{\mathrm{2}} \to {\mathrm{Na}}_{\mathrm{2}}{\mathrm{S}}.$$ When the cell is charged back to 2.8 V, Na2S2 and S are not detectable by in situ Raman spectroscopy or in situ synchrotron XRD, indicating that this reaction is not (or is only slightly) reversible; the processes from Na2S to Na2S4 and to Na2Sx are expected to be reversible. The peaks corresponding to Na2S in the Raman spectra and in the synchrotron XRD patterns persist after its initial generation, probably because the final Na2S product is only partially reversible and thus accumulates during the prolonged discharge/charge process. Significantly, the synchrotron XRD data for the second discharge process do not show any trace of Na2S2, and the diffraction peak intensity of Na2S4 obviously decreases, indicating that the reduction of Na2S4 into Na2S is very fast. We analyzed this phenomenon thoroughly and propose a new mechanism in which atomic Co rapidly catalyzes the reduction of Na2S4 into Na2S; this electrocatalytic reaction effectively slows the dissolution of Na2S4 during cycling and results in the excellent electrochemical performance of S@Con-HC.
Furthermore, the polysulfide dissolution behaviors of the S@Con-HC and S@HC electrodes, observed in transparent glass cells, are compared in Supplementary Fig. 19. The cell with S@Con-HC remained colorless during the 10-h discharge process, which implies alleviation of polysulfide dissolution and suggests that atomic Co kinetically catalyzes the polysulfide reduction to Na2S rather than allowing dissolution into the electrolyte. However, yellow polysulfide was observed on the surface of the S@HC electrode after 5 h of discharge, and after a 10-h sodiation process, yellow polysulfide could clearly be seen dissolved in the cell. This color change of S@HC indicates that polysulfide dissolution into the electrolyte, i.e., the shuttle effect, leads to a loss of active material. To guarantee the reliability of the capacity of the S@Con-HC cathode, the capacity contribution of the S host, Con-HC, was evaluated as well. The Con-HC was fabricated from the S@Con-HC sample by dissolving the loaded S with CS2 solvent. The XRD results for Con-HC and S@Con-HC are shown in Supplementary Fig. 20. Con-HC does not show any characteristic S peaks, indicating that the S has been completely removed. The discharge/charge profiles and cycling performance of Con-HC are shown in Supplementary Fig. 21a: it displays a very low initial reversible capacity of 70 mA h g−1, retaining only 40.1 mA h g−1 after 200 cycles. Correspondingly, Supplementary Fig. 21b clearly shows that the capacity contribution of Con-HC in the S@Con-HC cathode is negligible. Meanwhile, the compositional and morphological changes of S@Con-HC after 600 cycles are shown in Supplementary Fig. 22, which also indicate that the atomic Co in S@Con-HC effectively enhances the reversible capacity of the RT-Na/S@Con-HC batteries.
To confirm our hypothesis, ab initio molecular dynamics (AIMD) simulations were used to reveal the decomposition of the Na2S4 cluster during adsorption on atomic Co/carbon (Fig. 6a) and on the carbon support (Fig. 6b). Figure 6a, b shows the decomposition of the Na2S4 cluster and its evolution into Na2S3, Na2S2, and Na2S clusters on atomic Co/carbon and on the carbon support. An ideal model of sp3 carbon, including 216 C atoms and two exposed surfaces terminated by 72 H atoms49, was applied in modeling the carbon support to calculate the adsorption of a Na2S4 cluster. The DFT calculations considered the single atomic Co occupying 41% in the S@Con-HC and the Co6 cluster consisting of six Co atoms with a size of ~0.1 nm. The adsorption energy was defined as E(ad) = E(ad/surf) − E(surf) − E(ad), where E(ad/surf), E(surf), and E(ad) are the total energies of the adsorbate bound to the surface, the clean surface, and the free adsorbate in the gas phase, respectively. The adsorption energy of the Na2S4 cluster on the carbon support is −0.64 eV. The binding energy of the Co6 cluster with the carbon support layer is −1.21 eV; meanwhile, Na2S4 initially adsorbs on the Co6 cluster with a binding energy of −0.64 eV, the same as on the sp3 carbon surface. However, the Na2S4 structure was observed to decompose spontaneously on the Co6 cluster during the AIMD simulation, whereas on the pure carbon support Na2S4 did not decompose. As presented in Fig. 6a, Na2S3, Na2S2, and Na2S clusters were successively identified on the Co6 cluster, and the dissociated S atoms were trapped by the Co6 cluster. Figure 6c displays the relative adsorption energies of these sodium polysulfide clusters, and the corresponding data are listed in Supplementary Table 1: the adsorption energy of Na2S4 on Co6 is −4.33 eV, while for Na2S3 it shifts negatively to −4.85 eV.
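The adsorption-energy convention used here (more negative means stronger binding) can be written out directly. The reported per-species values are taken from the text; the numeric inputs in the function call below are hypothetical and only illustrate the sign convention:

```python
def adsorption_energy(e_ad_surf, e_surf, e_ad):
    """E(ad) = E(ad/surf) - E(surf) - E(ad); more negative = stronger binding."""
    return e_ad_surf - e_surf - e_ad

# Hypothetical total energies (eV) for illustration only:
example = adsorption_energy(-10.0, -6.0, -3.0)  # -> -1.0 eV, i.e. favorable binding

# Reported adsorption energies (eV) of polysulfide clusters on the Co6 cluster:
e_on_co6 = {"Na2S4": -4.33, "Na2S3": -4.85, "Na2S2": -7.85, "Na2S": -10.67}

# Binding strengthens monotonically toward the fully sodiated product:
species = ["Na2S4", "Na2S3", "Na2S2", "Na2S"]
assert all(e_on_co6[a] > e_on_co6[b] for a, b in zip(species, species[1:]))
```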
Furthermore, the adsorption energy of Na2S2 is −7.85 eV, and that of Na2S shifts further to −10.67 eV. This strong adsorption energy of Na2S indicates that the reaction from Na2S4 to Na2S is kinetically fast. The binding energies of these sodium polysulfide clusters are much stronger than those on the pure carbon support, indicating that the decomposition of Na2S4 in the presence of the Co6 cluster is electrocatalyzed, consistent with the inference from the operando Raman and synchrotron XRD results. Schematic illustrations of the electrode reaction mechanisms for S@Con-HC and S@HC are shown in Fig. 6d, e. The atomic Co sites, with surface sulfurization, effectively alleviate polysulfide dissolution through polar−polar interactions. Moreover, the polysulfides confined in the inner carbon shell can be fully catalytically reduced into Na2S by atomic Co, leading to high S utilization. Therefore, the atomic Co in S@Con-HC plays a critical role in achieving sustainable cycling stability and high reversible capacity. By contrast, the intensive "shuttle effect" and incomplete sodiation reactions result in the inferior performance of the S@HC cathode.

Density functional theory results and electrode reaction mechanism. a Optimized structures of Na2S4 cluster on carbon-supported Co6 cluster, and b on carbon support. Purple: Na; yellow: S; blue: Co; gray: C; white: H. c Energy profiles of Na2S4 adsorption on carbon-supported Co6 cluster (in blue) and carbon support (in red). d, e Schematic illustrations of electrode reaction mechanism of atomic cobalt-decorated hollow carbon sulfur host (S@Con-HC) and hollow carbon hosting sulfur (S@HC), respectively

Overall, atomic Co, including SA Co and Co clusters, has been successfully applied in RT-Na/S batteries as a superior electrocatalytic host.
The novel S@Con-HC electrode delivers a high initial reversible capacity of 1081 mA h g−1; even after 600 cycles, it achieves a superior reversible capacity of 508 mA h g−1 at 100 mA g−1 without any degradation of the elaborate nanostructure. The atomically dispersed, polarized Co is responsible for this outstanding enhancement of the S cathode, pushing the capability of Co (via Co−S bonding) for S/polysulfide immobilization and activation in RT-Na/S batteries to its limit. Meanwhile, in situ Raman, synchrotron XRD, and DFT are combined to confirm that atomic Co electrocatalytically reduces Na2S4 into Na2S, which effectively alleviates the dissolution of polysulfides and thus impedes the shuttle effect. Significantly, this work introduces atomic Co into electrode design, innovatively bridging the battery and electrocatalysis fields and providing a new direction for the design of electrode materials for the advancement of various battery technologies, especially RT-Na/S batteries.

Synthesis of hollow carbon nanospheres

Commercial silicon nanoparticles (~60–70 nm), utilized as hard templates, were first coated with resorcinol formaldehyde (RF) via a sol−gel process. Specifically, 0.15 g Si nanoparticles and 0.46 g cetyltrimethylammonium bromide (CTAB) were added to 14.08 mL of H2O and transferred into a three-neck round-bottom flask. A homogeneous dispersion was obtained after 0.5 h each of continuous ultrasonication and stirring. Secondly, 0.7 g resorcinol, 56.4 mL of absolute ethanol, and 0.2 mL of NH4OH were added to the dispersion sequentially; the flask was maintained at 35 °C with stirring for 0.5 h, followed by the addition of 0.1 mL formalin. The RF polymerization was completed after continued stirring for 6 h at 35 °C and ageing overnight. The obtained Si@RF nanospheres were collected and washed with deionized water and alcohol, respectively.
The core-shell Si@C sample was prepared by calcination of the Si@RF powder at 600 °C for 4 h (5 °C min−1) in a N2 atmosphere. Finally, hollow carbon nanospheres (HC) were prepared by etching the Si template away with a 2.0 M NaOH solution.

Synthesis of different sulfur cathode samples

A sulfur host, cobalt nanoparticle-decorated HC (Co-HC), was synthesized by uniform dispersion of 44.76 mg CoCl2 and 100 mg HC in ethanol via ultrasonication. The HC containing CoCl2 was then heated overnight in a blast oven at 80 °C, during which the mixture solidified and shrank as the ethanol evaporated. Afterwards, the mixture was reduced at 200 °C for 2 h in a forming gas of 10 vol% H2 in nitrogen, leading to the formation of Co-HC. Three S cathode samples were fabricated based on this Co-HC host. A mixture of Co-HC:S with a weight ratio of 1:1.5 was first ground by mortar and pestle, and then sealed in a Teflon-lined autoclave. A primary S cathode, the S/Co-HC composite, was obtained after the autoclave was heated at 155 °C for 12 h. When the obtained S/Co-HC composite was further sealed in a quartz ampoule and thermally treated at 300 or 400 °C for 2 h in a N2 atmosphere, two new samples, denoted S@Con-HC and S@CoS2-HC, respectively, were synthesized. In addition, a contrast sample with plain HC as the S host was prepared, in which S was embedded into the plain HC framework (denoted S@HC); the synthesis procedure is the same as that of S@Con-HC, utilizing HC instead of Co-HC.

Structural characterization

The morphologies of the samples were investigated by SEM (JEOL 7500), TEM (JEOL 2011, 200 keV), and STEM (JEOL ARM-200F, 200 keV). The XRD patterns were collected by powder XRD (GBC MMA diffractometer) with Cu Kα radiation at a scan rate of 1° min−1. XPS measurements were carried out using Al Kα radiation in fixed analyzer transmission mode: the pass energy was 60 eV for the survey spectra and 20 eV for the specific elements.
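The nominal sulfur fraction implied by the 1:1.5 Co-HC:S mixing ratio can be checked quickly. This is only the as-mixed value; it ignores any S lost during the melt-diffusion and subsequent heat treatments, so the real loading would be lower:

```python
def mass_fraction_S(host_mass_mg, sulfur_mass_mg):
    """Nominal sulfur weight fraction of a host:S mixture."""
    return sulfur_mass_mg / (host_mass_mg + sulfur_mass_mg)

# Co-HC : S = 1 : 1.5 by weight, as ground before the autoclave step
nominal_S_wt_pct = 100.0 * mass_fraction_S(1.0, 1.5)
print(round(nominal_S_wt_pct, 1))  # -> 60.0
```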
Electrochemical measurements

The electrochemical tests were conducted by assembling coin-type half-cells in an argon-filled glove box. The slurry was prepared by fully mixing 70 wt% active material (S/Co-HC, S@Con-HC, S@CoS2-HC, or S@HC), 10 wt% carbon black, and 20 wt% carboxymethyl cellulose (CMC) in an appropriate amount of water via a planetary mixer (KK-250S). The obtained slurry was then pasted on Cu foil using a doctor blade set to a thickness of 100 µm, followed by drying at 50 °C in a vacuum oven overnight. The working electrode was prepared by punching the electrode film into discs of 0.97 cm diameter. Sodium foil was employed as both reference and counter electrode. The electrodes were separated by a glass fiber separator. The electrolyte, 1.0 M NaClO4 in propylene carbonate/ethylene carbonate with a volume ratio of 1:1 and 5 wt% fluoroethylene carbonate additive (PC/EC + 5 wt% FEC), was prepared and used in this work. The electrochemical performance was tested on a LAND Battery Tester with a voltage window of 0.8–2.8 V. All cell capacities were normalized to the weight of sulfur. CV was performed using a Biologic VMP-3 electrochemical workstation.

In situ measurements

The in situ Raman cell was purchased from Shenzhen Kejing Star. The in situ Raman spectra were collected with a Renishaw InVia Raman microscope, with a 532 nm excitation laser and an L50× objective lens. The spectra were collected in galvanostatic mode while the in situ Raman cell was discharged/charged at a current rate of 500 mA g−1 using a computer controller (CHI 660D). The acquisition time for each Raman spectrum was 60 s, and a low laser power was utilized to avoid electrode damage during the long-term measurements. For the in situ synchrotron XRD measurements, the cells were similar to the above-mentioned coin cells used for electrochemical performance testing.
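The electrode geometry described above translates into a small amount of arithmetic. The disc diameter and slurry composition are from the text; the 1.0 mg coating mass is a hypothetical value for illustration only:

```python
import math

# Electrode disc punched at 0.97 cm diameter, as described above
diameter_cm = 0.97
area_cm2 = math.pi * (diameter_cm / 2.0) ** 2

# Slurry composition (wt%): active material / carbon black / CMC binder
composition = {"active": 70, "carbon_black": 10, "cmc": 20}
assert sum(composition.values()) == 100

# Hypothetical coating mass of 1.0 mg on the disc (for illustration only):
coating_mg = 1.0
active_mg = coating_mg * composition["active"] / 100.0
print(f"disc area = {area_cm2:.3f} cm^2, active mass = {active_mg:.2f} mg")
```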
To enhance the diffraction peak intensity, a thicker layer of cathode material was loaded on the Cu foil, with loading up to 5 mg cm−2. To guarantee that the X-ray beam could penetrate the whole cell and that the electrochemical reactions could be monitored, three 4-mm diameter holes were punched in the negative and positive caps as well as the spacer. Kapton film (showing only low-intensity responses in XRD patterns) was then used to cover the holes in the negative and positive caps, and AB glue was used for complete sealing. The charge/discharge process was conducted with a battery test system (Neware) connected to the cell.

Computational methods

The spin-polarized electronic structure calculations were performed in the Vienna Ab initio Simulation Package code with the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional. Projector-augmented-wave (PAW) pseudopotentials were utilized to describe core electron interactions50,51,52. Considering the significance of van der Waals (vdW) forces to the adsorption, we utilized the D3 dispersion corrections with zero damping to describe the vdW interactions53,54. The Co cluster consisted of six Co atoms with a size of ~0.1 nm, and the Co−Co bond distance was 2.24 Å. The Na2S4 cluster was first obtained after 10 ps of AIMD simulation at 350 K, and the final structure was optimized. To gain insight into the dissociative adsorption of Na2S4 on the carbon-supported Co6 cluster, we first performed an AIMD simulation for 10 ps (10,000 steps, 1 fs per step) within the canonical (NVT) ensemble at 350 K to accelerate the dissociation of the Na2S4 cluster on the carbon-supported Co6 cluster. During the AIMD simulations, the carbon support was fixed while the Co6 and Na2S4 clusters were allowed to move. Secondly, we chose some representative sodium polysulfide structures, i.e., Na2S3, Na2S2, and Na2S clusters, observed in the molecular dynamics simulations.
Thirdly, the geometries of these sodium polysulfide clusters were optimized to calculate the total energies. The cut-off energy was set to 370 eV for the molecular dynamics simulations and to 450 eV for the geometry optimizations, to obtain accurate energies. Gamma-centered Monkhorst-Pack k-point sampling was used. In this paper, the adsorption energy was defined as E(ad) = E(ad/surf) − E(surf) − E(ad), where E(ad/surf), E(surf), and E(ad) are the total energies of the adsorbate bound to the surface, the clean surface, and the free adsorbate in the gas phase, respectively.

The data that support the findings of this work are available from the corresponding author upon reasonable request.

Dunn, B., Kamath, H. & Tarascon, J. M. Electrical energy storage for the grid: a battery of choices. Science 334, 928 (2011). Armand, M. & Tarascon, J. M. Building better batteries. Nature 451, 652–657 (2008). Sathiya, M. et al. Reversible anionic redox chemistry in high-capacity layered-oxide electrodes. Nat. Mater. 12, 827–835 (2013). Tan, G. et al. Freestanding three-dimensional core-shell nanoarrays for lithium-ion battery anodes. Nat. Commun. 7, 11774 (2016). Rogers, J. A., Someya, T. & Huang, Y. Materials and mechanics for stretchable electronics. Science 327, 1603 (2010). Zhang, W., Mao, J., Li, S., Chen, Z. & Guo, Z. Phosphorus-based alloy materials for advanced potassium-ion battery anode. J. Am. Chem. Soc. 139, 3316–3319 (2017). Yang, C. P., Yin, Y. X., Guo, Y. G. & Wan, L. J. Electrochemical (de)lithiation of 1D sulfur chains in Li-S batteries: a model system study. J. Am. Chem. Soc. 137, 2215–2218 (2015). Manthiram, A., Fu, Y., Chung, S. H., Zu, C. & Su, Y. S. Rechargeable lithium–sulfur batteries. Chem. Rev. 114, 11751–11787 (2014). Seh, Z. W., Sun, Y., Zhang, Q. & Cui, Y. Designing high-energy lithium-sulfur batteries. Chem. Soc. Rev. 45, 5605–5634 (2016). Ji, X., Lee, K. T. & Nazar, L. F.
A highly ordered nanostructured carbon-sulphur cathode for lithium-sulphur batteries. Nat. Mater. 8, 500–506 (2009). Zhou, G., Paek, E., Hwang, G. S. & Manthiram, A. Long-life Li/polysulphide batteries with high sulphur loading enabled by lightweight three-dimensional nitrogen/sulphur-codoped graphene sponge. Nat. Commun. 6, 7760 (2015). Hwang, J. Y., Myung, S. T. & Sun, Y. K. Sodium-ion batteries: present and future. Chem. Soc. Rev. 46, 3529–3614 (2017). Chao, D. et al. Array of nanosheets render ultrafast and high-capacity Na-ion storage by tunable pseudocapacitance. Nat. Commun. 7, 12122 (2016). Yabuuchi, N., Kubota, K., Dahbi, M. & Komaba, S. Research development on sodium-ion batteries. Chem. Rev. 114, 11636–11682 (2014). Xin, S., Yin, Y. X., Guo, Y. G. & Wan, L. J. A high-energy room-temperature sodium-sulfur battery. Adv. Mater. 26, 1261–1265 (2014). Lu, X. et al. Advanced intermediate-temperature Na-S battery. Energy Environ. Sci. 6, 299–306 (2013). Hueso, K. B., Armand, M. & Rojo, T. High temperature sodium batteries: status, challenges and future trends. Energy Environ. Sci. 6, 734 (2013). Wei, S. et al. A stable room-temperature sodium-sulfur battery. Nat. Commun. 7, 11722 (2016). Wei, S., Ma, L., Hendrickson, K. E., Tu, Z. & Archer, L. A. Metal-sulfur battery cathodes based on PAN-sulfur composites. J. Am. Chem. Soc. 137, 12143–12152 (2015). Wang, Y. X. et al. Achieving high-performance room-temperature sodium-sulfur batteries with S@Interconnected mesoporous carbon hollow nanospheres. J. Am. Chem. Soc. 138, 16576–16579 (2016). Hwang, T. H., Jung, D. S., Kim, J. S., Kim, B. G. & Choi, J. W. One-dimensional carbon-sulfur composite fibers for Na-S rechargeable batteries operating at room temperature. Nano Lett. 13, 4532–4538 (2013). Pang, Q., Kundu, D., Cuisinier, M. & Nazar, L. F. Surface-enhanced redox chemistry of polysulphides on a metallic and polar host for lithium-sulphur batteries. Nat. Commun. 5, 4759 (2014). Zhou, G. et al.
Catalytic oxidation of Li2S on the surface of metal sulfides for Li-S batteries. Proc. Natl. Acad. Sci. USA 114, 840–845 (2017). Zheng, S. et al. Nano-copper-assisted immobilization of sulfur in high-surface-area mesoporous carbon cathodes for room temperature Na-S batteries. Adv. Energy Mater. 4, 1400226 (2014). Zhang, B. W. et al. In situ grown S nanosheets on Cu foam: an ultrahigh electroactive cathode for room-temperature Na-S batteries. ACS Appl. Mater. Interfaces 9, 24446–24450 (2017). Tyo, E. C. & Vajda, S. Catalysis by clusters with precise numbers of atoms. Nat. Nano 10, 577–588 (2015). Yao, S. et al. Atomic-layered Au clusters on α-MoC as catalysts for the low-temperature water-gas shift reaction. Science 357, 389–393 (2017). Liu, P. et al. Photochemical route for synthesizing atomically dispersed palladium catalysts. Science 352, 797 (2016). Yang, X. F. et al. Single-atom catalysts: a new frontier in heterogeneous catalysis. Acc. Chem. Res. 46, 1740–1748 (2013). Qiao, B. et al. Single-atom catalysis of CO oxidation using Pt1/FeOx. Nat. Chem. 3, 634–641 (2011). Jones, J. et al. Thermally stable single-atom platinum-on-ceria catalysts via atom trapping. Science 353, 150 (2016). Deng, D. et al. A single iron site confined in a graphene matrix for the catalytic oxidation of benzene at room temperature. Sci. Adv. 1, e1500462 (2015). Li, G. et al. Three-dimensional porous carbon composites containing high sulfur nanoparticle content for high-performance lithium-sulfur batteries. Nat. Commun. 7, 10601 (2016). Liu, W. et al. Single-atom dispersed Co-N-C catalyst: structure identification and performance for hydrogenative coupling of nitroarenes. Chem. Sci. 7, 5758–5764 (2016). Ganesan, P., Prabu, M., Sanetuntikul, J. & Shanmugam, S. Cobalt sulfide nanoparticles grown on nitrogen and sulfur codoped graphene oxide: an efficient electrocatalyst for oxygen reduction and evolution reactions. ACS Catal. 5, 3625–3637 (2015). Wang, J. et al.
Sulfur composite cathode materials for rechargeable lithium batteries. Adv. Funct. Mater. 13, 487–492 (2003). Zhang, B., Qin, X., Li, G. R. & Gao, X. P. Enhancement of long stability of sulfur cathode by encapsulating sulfur into micropores of carbon spheres. Energy Environ. Sci. 3, 1531–1537 (2010). Yu, X. & Manthiram, A. Room-temperature sodium-sulfur batteries with liquid-phase sodium polysulfide catholytes and binder-free multiwall carbon nanotube fabric electrodes. J. Phys. Chem. C 118, 22952–22959 (2014). Yu, X. & Manthiram, A. Performance enhancement and mechanistic studies of room-temperature sodium-sulfur batteries with a carbon-coated functional Nafion separator and a Na2S/activated carbon nanofiber cathode. Chem. Mater. 28, 896–905 (2016). Park, C. W., Ahn, J. H., Ryu, H. S., Kim, K. W. & Ahn, H. J. Room-temperature solid-state sodium/sulfur battery. Electrochem. Solid St. 9, A123–A125 (2006). Wenzel, S. et al. Thermodynamics and cell chemistry of room temperature sodium/sulfur cells with liquid and liquid/solid electrolyte. J. Power Sources 243, 758–765 (2013). Lee, D. J. et al. Alternative materials for sodium ion-sulphur batteries. J. Mater. Chem. A 1, 5256 (2013). Qiang, Z. et al. Ultra-long cycle life, low-cost room temperature sodium-sulfur batteries enabled by highly doped (N,S) nanoporous carbons. Nano Energy 32, 59–66 (2017). Yao, Y. et al. Binding S0.6Se0.4 in 1D carbon nanofiber with C-S bonding for high-performance flexible Li-S batteries and Na-S batteries. Small 13, 1603513 (2017). Lu, Q. et al. Freestanding carbon fiber cloth/sulfur composites for flexible room-temperature sodium-sulfur batteries. Energy Storage Mater. 8, 77–84 (2017). Yu, X. & Manthiram, A. Capacity enhancement and discharge mechanisms of room-temperature sodium-sulfur batteries. ChemElectroChem 1, 1275–1280 (2014). El Jaroudi, O., Picquenard, E., Gobeltz, N., Demortier, A. & Corset, J. 
Raman spectroscopy study of the reaction between sodium sulfide or disulfide and sulfur: identity of the species formed in solid and liquid phases. Inorg. Chem. 38, 2917–2923 (1999). Janz, G. J. et al. Raman studies of sulfur-containing anions in inorganic polysulfides. Sodium polysulfides. Inorg. Chem. 15, 1759–1763 (1976). Peng, X. X. et al. Graphitized porous carbon materials with high sulfur loading for lithium-sulfur batteries. Nano Energy 32, 503–510 (2017). Kresse, G. & Hafner, J. Ab initio molecular dynamics for liquid metals. Phys. Rev. B 47, 558–561 (1993). Kresse, G. & Hafner, J. Ab initio molecular dynamics for open-shell transition metals. Phys. Rev. B 48, 13115–13118 (1993). Kresse, G. & Furthmuller, J. Efficiency of ab-initio total energy calculations for metals and semiconductors using a plane-wave basis set. Comput. Mater. Sci. 6, 15–50 (1996). Grimme, S., Antony, J., Ehrlich, S. & Krieg, H. A consistent and accurate ab initio parametrization of density functional dispersion correction (DFT-D) for the 94 elements H-Pu. J. Chem. Phys. 132, 154104 (2010). Xu, G. et al. Electrostatic self-assembly enabling integrated bulk and interfacial sodium storage in 3D titania-graphene hybrid. Nano Lett. 18, 336–346 (2017).

This research was supported by the Australian Research Council (ARC) (DE170100928) and the Commonwealth of Australia through the Automotive Australia 2020 Cooperative Research Centre (Auto CRC). The authors acknowledge the use of the facilities at the UOW Electron Microscopy Centre funded by ARC grants (LE0882813 and LE0237478) and Dr. Tania Silver for her critical reading.
Institute for Superconducting and Electronic Materials, Australian Institute of Innovative Materials, University of Wollongong, Innovation Campus, Squires Way, North Wollongong, NSW, 2500, Australia: Bin-Wei Zhang, Yun-Xiao Wang, Lei Zhang, Wei-Hong Lai, Li Wang, Shu-Lei Chou, Hua-Kun Liu & Shi-Xue Dou. College of Chemistry and Materials Science, Anhui Normal University, 241000, Wuhu, P.R. China: Tian Sheng. Hunan Key Laboratory of Micro-Nano Energy Materials and Devices, Xiangtan University, 411105, Hunan, P.R. China: Yun-Dan Liu. State Key Laboratory for Modification of Chemical Fibers and Polymer Materials, College of Materials Science and Engineering, Donghua University, 201620, Shanghai, P.R. China: Jianping Yang. Australian Synchrotron, 800 Blackburn Road, Clayton, VIC, 3168, Australia: Qin-Fen Gu.

B.-W.Z., Y.-X.W., and S.-L.C. conceived and designed the experiments. B.-W.Z. performed all synthetic and characterization experiments. T.S. performed ab initio molecular dynamics simulations. Y.-D.L. performed Raman experiments. L.Z. and W.-H.L. performed the TGA experiments. B.-W.Z., L.W., and Q.-F.G. performed synchrotron X-ray diffraction measurements, and J.Y. performed the ICP measurement. B.-W.Z., Y.-X.W., S.-L.C., H.-K.L., and S.-X.D. analyzed the data and wrote the manuscript. All authors read and approved the final manuscript. Correspondence to Yun-Xiao Wang or Shu-Lei Chou.

Zhang, BW., Sheng, T., Liu, YD. et al. Atomic cobalt as an efficient electrocatalyst in sulfur cathodes for superior room-temperature sodium-sulfur batteries. Nat Commun 9, 4082 (2018).
DOI: https://doi.org/10.1038/s41467-018-06144-x
Why does "high SWR" damage transmitters, instead of "impedance mismatch"? In the audio realm, where cable lengths are insignificant compared to the signal wavelength, I would understand an impedance mismatch damaging an amplifier as follows: Take an audio amplifier that can deliver 100 watts of power into an 8 ohm speaker driver. So under normal operating conditions, by P = R * I^2, there would be 3.5A of current flowing to deliver these 100W. If this amplifier were connected instead to 2 ohm speakers, it would need to pass 7A of current, assuming it still attempts to deliver the same 100W. Viewed even more simply: a 2 ohm load tends to "short circuit" an amp designed for an 8 ohm load. Following this logic, connecting the same amplifier to a 32 ohm load would not hurt it, as the resulting current would be smaller than expected. With radio frequency, we don't really worry about the impedance being "too low" but rather "too mismatched", and we use SWR to represent this mismatch. Why in RF is connecting a 500 ohm load, i.e. 10:1 SWR, to my amplifier's 50 ohm output considered just as bad as causing a 10:1 SWR by connecting a 5 ohm load instead? Do the two 10:1 SWR cases cause amplifier failure for different reasons? I wonder if in the 5 ohm case it is just a simple "too much current" as in the audio case, but in the 500 ohm case the transmission line somehow ends up increasing the voltage beyond what the transistors can handle. Does it make any difference if we eliminate the transmission line altogether, so that standing waves can't really develop? Would it be okay to connect a high-impedance antenna feedpoint directly to a transmitter without problems, whereas a low-impedance antenna might still cause an overcurrent condition? electronics amplifier impedance natevw - AF7TB A similar question (which unfortunately has no answers that actually understand the question, in my opinion).
– Kevin Reid AG6YO♦ May 7 '16 at 1:44 It isn't high SWR itself that destroys the radio, but very high voltage or excessive temperature in the PA stage. Both can result from an impedance mismatch / SWR; "high SWR damage" is just a shorthand. – Jacek Cz May 7 '16 at 6:24 A loudspeaker's impedance (almost pure resistance, BTW) is a "minimum" requirement, whereas an antenna's must be "equal" (as closely as possible). For example, an audio amplifier cannot be damaged by an open speaker cable. Deep differences. – Jacek Cz May 7 '16 at 6:28 The VSWR isn't the problem per se; it's just the impedance that appears at the transmitter's terminals. Take the load at the end of the transmission line, transform it according to the electrical length of the feedline, and put that equivalent impedance right at the transmitter's terminals and you will have the same damage. A particular VSWR can result in a range of impedances. For example, 5:1 on a 50 ohm line can mean 10 ohms, 250 ohms, or a range of complex impedances in between. Some of those impedances may damage the transmitter, others may not. But since the length of the feedline isn't known, it's safest to keep the VSWR low. So why can a mismatched load cause damage? In summary, the transmitter's final stage is made of reactive components, that is, components that store and release energy. These reactive components are selected so that the load absorbs some of that stored energy. Without the load absorbing that energy, it instead appears as a high voltage or high current somewhere that may not be equipped to handle it. Let's give a very simple example to illustrate: a very simplified class-C common-source amplifier. You've probably seen such amplifiers with a resistor in place of L1, but for power amplifiers it's more common to use inductors to avoid the associated resistive losses.
simulate this circuit – Schematic created using CircuitLab When the transistor is on, current through L1 increases and L1 stores energy. Let's say it's on long enough for current through L1 to rise to 1A. When the transistor is off, the voltage across the load will rise to 50V, because that's what it takes for 1A to pass through 50Ω. So we need M1 to have $V_\text{ds(max)} > 50\:\mathrm V$. What happens if the load is open? Since no current can flow through the load, the voltage will rise even higher, whatever it takes to get 1A to continue flowing through L1. In this case it will be whatever voltage causes M1 to go into avalanche breakdown. RF power transistors aren't usually avalanche rated, so this means damage. You can imagine other situations that lead to excessive drain current in M1 as well. Of course, a real transmitter will be more complicated. It will probably have a low-pass filter on its output, multiple transistors with complex impedances at RF, and more reactive components within for impedance matching and filtering. All these values are selected with the assumption of a 50 ohm resistive load. Which load impedances will cause failure depends on the particular topology of the amplifier. Phil Frost - W8II Since writing this answer, I've learned this explanation isn't entirely accurate. I'm leaving it because it's not entirely wrong, either. Ultimately, it is impedance mismatch (really, too low an impedance) that damages transmitters. The thing to realize is that the impedance seen by the final power transistors in the amplifier (the part that usually breaks) isn't the same as the impedance at the amplifier's terminals. The reason is that amplifiers contain filters on their output for harmonic suppression, among other things. These filters are designed with the assumption of a 50Ω resistive load. When that assumption holds, the filter presents the design impedance to the finals, which the designer has determined will not smoke the transistors.
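The point both answers turn on, that a single SWR value corresponds to a whole range of impedances depending on feedline length, can be checked numerically. Below is a minimal sketch assuming a lossless 50 Ω line; the 500 Ω and 5 Ω loads are the question's own examples:

```python
import cmath

Z0 = 50.0  # characteristic impedance of the feedline, ohms

def vswr(zl, z0=Z0):
    """Standing wave ratio for a load zl on a line of impedance z0."""
    gamma = abs((zl - z0) / (zl + z0))  # reflection coefficient magnitude
    return (1 + gamma) / (1 - gamma)

def z_in(zl, length_wavelengths, z0=Z0):
    """Impedance seen at the input of a lossless line terminated in zl."""
    t = cmath.tan(2 * cmath.pi * length_wavelengths)  # tan(beta * l)
    return z0 * (zl + 1j * z0 * t) / (z0 + 1j * zl * t)

# Both of the question's loads give exactly 10:1 SWR:
print(vswr(500.0), vswr(5.0))

# But the impedance the transmitter actually sees depends on line length;
# a quarter wave of line turns the 500-ohm load into roughly 5 ohms:
print(z_in(500.0, 0.25))
```

So the same 10:1 SWR can present the finals with anything from a near short to a near open, which is why the safe advice is simply to keep the SWR low.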
When the load is not 50Ω, the design conditions are violated and the impedance presented to the finals could be anything. If you get lucky, it will be a high impedance which just means the amplifier can't deliver its full rated power. If you get unlucky, it's a low impedance which draws too much current and overheats the transistors. It's hard to predict where you will get lucky, and where you will get unlucky, without knowing the details of the transmitter's filters. Here's an example from Is there an optimum transmission line length for maximum power transfer? This is a 30/20m filter, so it is intended to pass up to about 14.4 MHz and attenuate all the higher harmonics of that. The orange line shows the case where there's a matched 50Ω load, while the blue and tan lines show 500Ω and 5Ω loads, which are two cases of a 10:1 SWR. Notice how the mismatch introduces peaks in the frequency response. Where these peaks are above the orange line, the amplifier is seeing a lower impedance, thus a higher current and more power. Here's the potential for damage. One of those blue peaks is right at 14 MHz and is about 10dB above the orange line. So on 20 meters, a 100W amplifier is suddenly trying to produce 1000W into a low impedance, which will quickly damage it. There are any number of impedances which will result in a 10:1 SWR, and depending on the transmission line you might get any of them. I suggest checking out a Smith chart tutorial to get familiar with how this works. This is why SWR is used to quantify the quality of the match: it is independent of transmission line length. For any given SWR, the height of those peaks in the graph above is about the same, and moving around different impedances with the same SWR just changes where they lie. Since you really don't know where they lie in practice, it's just best to avoid high SWR generally. Transmitters provide different levels of power, efficiency and distortion as the load changes.
This is studied using a technique called "load pull" by applying different loads to the transmitter and noting the effect on the three things mentioned above. Then output networks are designed to provide the best trade-off between these three things. For a transmitter which operates over a wide frequency range such as 1.8 to 30 MHz, this optimum load will vary somewhat with frequency. Therefore circuitry is made as broadband as possible to optimize the load over this range. At a given frequency and power level, changing the load seen by the transmitter by using an antenna tuner, for example, could further optimize this three-way trade-off, but the operator would have to monitor power output (easy to do), efficiency (somewhat harder to do) and distortion (quite hard to do). Therefore it is best to assume that a 50 ohm resistive load is the best to use at all frequencies and power levels. Transmitters sort of force you to do this by backing off the power when the SWR is high, meaning the load is not 50 ohms. Bypassing the "SWR foldback" and tuning for maximum output power is likely to cause efficiency and distortion to suffer more often than not. In a similar manner, tuning for best efficiency or distortion would probably reduce the power more often than not. Getting back to the original question, IF one could monitor efficiency, power and distortion, one could tune at a given frequency and power level to make the transmitter run cooler with acceptable power and distortion. This would represent the case where the load resistance AT THE OUTPUT DEVICES was somewhat higher than normal, just like the case of the audio amplifier. In practice this is quite impossible for normal wide band transceivers, which are always operating in somewhat of a compromised condition. K4ERO
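The flyback mechanism from the first answer can also be put into numbers. In that simplified class-C stage the inductor forces its current through whatever load is present when the transistor switches off, so the peak drain voltage scales directly with load resistance. Only the 1 A and 50 Ω figures come from that answer; the other load values below are illustrative:

```python
def peak_drain_voltage(i_inductor, r_load):
    """Ideal-inductor flyback: V = I * R at the instant the switch opens."""
    return i_inductor * r_load

I_L = 1.0  # amps flowing in L1 at the moment the transistor turns off

for r_load in (5.0, 50.0, 500.0):  # 10:1 low, matched, 10:1 high on a 50-ohm design
    print(f"{r_load:6.0f} ohm load -> peak V_ds = {peak_drain_voltage(I_L, r_load):.0f} V")

# An open load (r_load -> infinity) drives the drain toward avalanche breakdown,
# while a near-short instead leaves the transistor conducting excessive current.
```

The high-impedance mismatch therefore stresses the transistor's voltage rating, while the low-impedance mismatch stresses its current and thermal ratings, consistent with the question's guess that the two 10:1 cases fail for different reasons.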
Nucleus-cytoskeleton communication impacts on OCT4-chromatin interactions in embryonic stem cells Juan José Romero1 na1, María Cecilia De Rossi1 na1, Camila Oses1 na1, Camila Vázquez Echegaray1, Paula Verneri1, Marcos Francia1, Alejandra Guberman1,2 & Valeria Levi ORCID: orcid.org/0000-0002-1666-46751,3 BMC Biology volume 20, Article number: 6 (2022) Cite this article The cytoskeleton is a key component of the system responsible for transmitting mechanical cues from the cellular environment to the nucleus, where they trigger downstream responses. This communication is particularly relevant in embryonic stem (ES) cells since forces can regulate cell fate and guide developmental processes. However, little is known regarding cytoskeleton organization in ES cells, and thus, relevant aspects of nuclear-cytoskeletal interactions remain elusive. We explored the three-dimensional distribution of the cytoskeleton in live ES cells and show that these filaments affect the shape of the nucleus. Next, we evaluated if cytoskeletal components indirectly modulate the binding of the pluripotency transcription factor OCT4 to chromatin targets. We show that actin depolymerization triggers OCT4 binding to chromatin sites whereas vimentin disruption produces the opposite effect. In contrast to actin, vimentin contributes to the preservation of OCT4-chromatin interactions and, consequently, may have a pro-stemness role. Our results suggest roles of components of the cytoskeleton in shaping the nucleus of ES cells, influencing the interactions of the transcription factor OCT4 with the chromatin and potentially affecting pluripotency and cell fate. Cells are continuously exposed to forces that propagate to their interior through the cytoskeleton, a network of interconnected biopolymers and crosslinker molecules in constant remodeling. 
This filament network is also physically connected to the cell nucleus through the LINC (linker of nucleoskeleton and cytoskeleton) complex, whose main components KASH and SUN interact with the cytoskeleton and with the nuclear intermediate filaments, the lamins, respectively [1], constituting a direct mechanism for communicating mechanical signals to the nucleus interior [2]. Forces applied to cells may affect the shape and position of the nucleus [3, 4] and modulate diverse aspects of its function, including chromatin organization and gene expression programs [3, 5]. This relation is particularly relevant in stem cells since forces can regulate cell fate and guide developmental processes [6, 7]. In this direction, it was demonstrated that the elasticity of the cell matrix impacts lineage specification [8], opening the possibility of manipulating cell fate decisions through the rational design of substrates for in vitro differentiation protocols [9]. However, important aspects of the cytoskeleton organization in embryonic stem (ES) cells remain elusive, and thus its role in pluripotency maintenance and differentiation is not completely understood. Relevantly, disruption or alteration of cytoskeletal components such as actin [10, 11] or vimentin intermediate filaments [12] affects cell fate decisions, emphasizing the necessity of a three-dimensional (3D) description of the cytoskeleton organization in live ES cells. Based on a comparative analysis of the distribution of cytoskeletal proteins in single-plane images of immunolabeled stem cells and fibroblasts, a previous work claimed that the cytoskeleton of ES cells was poorly organized [13]. However, ES cells are essentially three-dimensional objects, and thus it is expected that single-plane observations are not sufficient to capture the complexity of the cytoskeleton.
Moreover, the fixation of cells required for the immunostaining procedure can modify the 3D architecture and organization of intracellular components including the cytoskeleton [14, 15]. Here, we study the 3D distribution of different cytoskeletal filaments in live ES cells since the role of the cytoskeleton in gene expression regulation is poorly understood in the pluripotent state compared to its role during differentiation. We also evaluate if the different cytoskeleton components modulate the nuclear shape and use fluorescence correlation spectroscopy (FCS) to test if these networks affect the dynamical organization of OCT4, a key pluripotency transcription factor (TF). Together, OCT4, SOX2, and NANOG constitute the core of pluripotency, defining a regulatory network that induces genes necessary to preserve pluripotency and represses those involved in differentiation [16]. Our study reveals new features of the 3D cytoskeleton organization in live ES cells that were hidden in single-plane images of fixed ES cells. We also show that alterations of either the actin or the intermediate filament vimentin networks affect the nuclear morphology and impact on OCT4-chromatin interactions, in contrast to alterations of the microtubule network, which do not modify these properties. These results highlight the role of specific cytoskeletal components in modulating the shape of the nucleus of ES cells and unveil their impact on the dynamical organization of a main pluripotency TF. We hypothesize that these early changes of OCT4-chromatin interactions may produce, at a longer time scale, modifications in gene expression ultimately affecting cell fate decisions.
Three-dimensional organization of the cytoskeleton of mouse ES cells In order to examine the 3D organization of the cytoskeleton in naïve ES cells, we acquired confocal z-stacks of live cells co-expressing cytoskeleton-related proteins fused to green fluorescent proteins (GFP or EGFP) and the histone H2B fused to the red fluorescent protein mCherry (H2B-mCherry) to visualize the cell nucleus simultaneously. To account for our observations, we report in each case the percentage of transfected cells that present a certain cytoskeletal feature (% of ncells). We should emphasize that these percentages do not correspond to the frequency of these features in ES cells because they also depend on other factors including the expression levels of the cytoskeletal proteins (which determine the signal/noise ratio of the specific structure) and instrumental factors. Particularly, the photobleaching caused during the z-stack confocal imaging and/or the scattering produced by intracellular structures may prevent the observation of a certain cytoskeleton feature in some planes of a given cell. We first observed the microtubules using a plasmid encoding the GFP-tagged microtubule-binding domain of ensconsin (EMTB-3xGFP) [17]. Fluorescent microtubule-associated proteins are excellent tools to label microtubules in living cells since they do not alter the network organization substantially [18, 19]. Figure 1a shows representative 3D images of the cells with microtubules that spread in the cytoplasm (Additional file 1: Supplementary Video S1 and Additional file 2: Supplementary Video S2) in clear contrast to the disorganized tubulin distribution previously suggested from single-plane images of immunolabeled ES cells [20]. Nevertheless, the network does not seem to present the typical radial-like distribution observed in many somatic cells [21]. ES cells present an atypical organization of microtubules in interphase with nucleation centers.
3D confocal images of ES cells co-transfected with H2B-mCherry (red) and EMTB-3xGFP (green) (A, B) or EB3-GFP (C, D). A 3D reconstruction of representative cells showing the organization of the microtubule network (Additional file 1: Supplementary Video S1 and Additional file 2: Supplementary Video S2). Other examples of 3D reconstructions can be found in Additional file 14: Supplementary Fig. S6. B Maximum intensity projection image merged with the transmission image collected at a single plane of the z-stack; the arrow points to a microtubule-enriched cellular protrusion extending to another cell. The top image was digitally saturated to facilitate the observation of the protrusion. C Representative, single-plane image of a cell expressing EB3-GFP (top); zoom-in images of the cell region delimited by the dashed square at four different frames of the time-lapse movie showing an EB3 comet in close contact to the nucleus (bottom). D Maximum intensity projection images obtained from a 100-image stack obtained during a time-lapse experiment lasting 166.6 s (left); flow maps of the EB3-GFP comets (right). The pink asterisk shows a microtubule nucleation center from which EB3 comets emanate. Scale bars: 10 μm. Other examples of the analysis of EB3 comets can be found in Additional file 15: Supplementary Video S8 In some cases (12% of ncells = 93), we detected microtubule-enriched protrusions that extend to other cells (Fig. 1b). Similar protrusions were also observed visualizing the plasma membrane by transfection of mem-mCherry (Additional file 3: Supplementary Fig. S1) and resemble those observed by scanning electron microscopy [22]. To get further insights in the organization of microtubules, we transfected ES cells with the end-binding protein EB3 fused to GFP (EB3-GFP) that associates to the growing tip of microtubules [23, 24], and acquired time-lapse confocal images at certain optical sections of the cells (Fig. 1c, d). 
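The flow maps of panel D summarize comet motion over the time-lapse. One minimal way to build such a map, assuming each tracked comet is a list of (x, y) positions per frame, is to bin the frame-to-frame displacement vectors spatially and average them (a sketch only; the pipeline actually used is described in the paper's Methods):

```python
from collections import defaultdict

def flow_map(trajectories, bin_size=5.0):
    """Average EB3-comet displacement vector per spatial bin.

    trajectories: iterable of comet tracks, each a list of (x, y) positions
    in consecutive frames (pixel units).
    Returns {(bin_x, bin_y): (mean_dx, mean_dy)}.
    """
    sums = defaultdict(lambda: [0.0, 0.0, 0])  # dx sum, dy sum, count
    for track in trajectories:
        for (x0, y0), (x1, y1) in zip(track, track[1:]):
            key = (int(x0 // bin_size), int(y0 // bin_size))
            s = sums[key]
            s[0] += x1 - x0
            s[1] += y1 - y0
            s[2] += 1
    return {k: (sx / n, sy / n) for k, (sx, sy, n) in sums.items()}

# Two comets moving rightward from the same region give a coherent bin vector,
# as expected near a nucleation center from which comets emanate:
tracks = [[(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)]]
print(flow_map(tracks))
```

Bins where the averaged vectors point radially away from a common point would correspond to the putative nucleation centers marked in the figure.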
We recovered the trajectories of the EB3-GFP comets and analyzed them to obtain a flow map of EB3 comets as described in the "Methods" section. Representative movies obtained in these imaging experiments show that, while some EB3-GFP comets point in every direction (32% of ncells = 25, Fig. 1d top panel and Additional file 4: Supplementary Video S3), others seem to radiate from specific sites in the cytoplasm (44% of ncells = 25, Fig. 1d bottom panel and Additional file 5: Supplementary Video S4), suggesting the presence of microtubule-organizing centers (MTOCs). Although our experiments do not allow identifying the nature of these MTOCs, a previous work described centrioles in electron microscopy images of ES cells [22]. However, another report suggests that centrioles first appear in mouse embryos after the 64-cell stage in trophectoderm cells and thus they are absent in the inner cell mass from which ES cells are derived [25], suggesting that the observed nucleation centers might be acentrosomal MTOCs. These structures were also observed in other cell types (reviewed in [26]) such as mammalian oocytes, which lack centriole pairs [27] and whose spindle microtubules are nucleated by multiple acentrosomal MTOCs [28]. Interestingly, some movies show EB3-GFP comets in close contact with the nucleus and other comets that seem to be poking it (24%, ncells = 25, Fig. 1c and Additional file 6: Supplementary Video S5), suggesting that they locally transmit pushing forces to this organelle. Relevantly, microtubules are usually involved in rotating and positioning the nucleus [29,30,31] and it has been previously proposed that poking microtubules produce nuclear wriggling that contributes to positioning this organelle [32]. We also explored the 3D distribution of actin in ES cells.
Figure 2a shows that EGFP-actin displayed a diffuse organization in the cell cytoplasm in line with previous low-resolution and single-plane images of actin immunostaining in ES cells [13] and in contrast to the clear filamentous structures observed in many somatic cells [33]. Nevertheless, our 3D live imaging experiments revealed other aspects of actin organization in ES cells. Actin preferentially concentrates in filopodia, cell-cell, and cell-substrate surfaces of ES cells. A–C Representative images of cells expressing EGFP-actin (green) and H2B-mCherry (red) exemplifying those key features of actin organization described in the text. Merge images (transmission, green and red channels) obtained at specific optical planes allow identifying the relative positions of the cells within colonies (left panels). Images obtained at specific planes of the cells as detailed in each case (central panels, in pseudocolor scale) and 3D reconstructions of the green and red channels images (right panels). White arrows point to actin-enriched structures in contact with the substrate, and the red and green arrows point to actin enrichments at cell-cell contacts and filopodia linking the cell to the substrate, respectively. The yellow arrows point to regions facing the extracellular milieu of those cells at the colony border. The white asterisks indicate actin-enriched filopodia extending from the cell showed in the dotted rectangle to a neighboring cell. D Representative examples of cells showing membrane blebs (light blue arrows) (left panels, pseudocolored images). Intensity profiles along the membrane borders (dotted, gray lines in the insets). Grey triangles show the initial and final positions of the lines along the blebs. Bleb formation may be also accompanied by changes in the nucleus shape (right panel, blue asterisk). The white asterisk indicates actin-enriched filopodia (Scale bars: 10 μm). 
Other examples of EGFP-actin fluorescence intensity along membrane blebs can be found in Additional file 16: Supplementary Fig. S7 Figure 2a–c shows actin-enriched structures in contact with the substrate (white arrows) as expected from the involvement of this cytoskeleton component in cell-substrate interactions [34]. Previous works in human induced pluripotent stem (iPS) cells described an actin fence formed by thick fibers that organize parallel to the colony borders, tightly packing the colony [35, 36]. Although we could not observe these fences in our images of mouse ES cells, actin concentrated in close proximity to membranes facing the external milieu (68% of nperipheral cells = 88, yellow arrows in Fig. 2a, b) with short actin-enriched filopodia linking cells of the colony borders to the substrate (83% of nperipheral cells = 88, green arrow in Fig. 2c); these filopodia are also present in inner cells but in a lower proportion (41% of ninner cells = 107). This different organization of actin in mouse and human pluripotent stem cells could also explain the comparatively higher cell-extracellular matrix traction forces generated by human ES cells [37]. Human and mouse pluripotent stem cell colonies also differ in their morphology and their substrate requirement in feeder-free conditions [38,39,40], evidencing the differences in cell-cell and cell-substrate interactions between these species. Figure 2b, c shows that EGFP-actin also concentrates at cell-cell contacts (65% of ncells = 195, red arrow) where it probably interacts with cell-adhesion molecules [41]. We observed actin-enriched structures that protrude from one cell and grasp the dorsal membrane of a neighboring cell (23% of ncells = 195, Fig. 2b, c white asterisks), suggesting that they may contribute to keeping the cells tightly packed within the colony. These protrusions also resemble those filopodia involved in the control of changes in cell shape during compaction of early mouse embryos [42].
Furthermore, we observed membrane blebs in some cells located at the colony boundaries (13% of nperipheral cells = 88, Fig. 2c, d, light-blue arrows); relevantly, the fluorescence intensity of EGFP-actin and therefore the actin concentration at these blebs seem to be lower. In a recent work, super-resolution microscopy showed that cortical actin in fixed ES cells organizes as a low-density, isotropic meshwork that does not depend on myosin II activity [43]. In this context, we can hypothesize that some cells apply forces to their neighbors—through those filopodia described above and/or lateral forces—which may release the tension by locally disrupting the sparse meshwork of cortical actin and thus generating a bleb. Similar blebs produced by a breakage of the actin cortex were observed when adherent cells detach from their substrate [44]. Additionally, blebbing increases during the exit from naïve pluripotency, prior to cell spreading in mouse ES cells [45, 46]. In some cases, blebs were accompanied by deformation of the nucleus, illustrating how forces applied to ES cells may also shape the nucleus (Fig. 2d). We next focused our attention on vimentin, one of the most studied intermediate filaments in many cell lines due to its key role in diverse cell processes such as migration [47]. Previous evidence also suggests that vimentin is relevant for differentiation of ES cells [12]. Based on immunofluorescence assays, Ginis et al. [38] claimed that this protein was undetectable in mouse ES cells whereas Boraas et al. [13] observed that it is expressed at relatively low levels. These apparently contradictory reports led us to explore vimentin expression by transcriptomic and proteomic data mining (Additional file 7: Supplementary Table S1, [48,49,50,51,52,53,54,55]). The analysis of RNA-seq and microarray data showed that vimentin is expressed at different stages of the developing embryo (Additional file 8: Supplementary Fig.
S2a), in ES cells and in other types of stem cells (Additional file 8: Supplementary Fig. S2b). Although vimentin mRNA levels increase during most differentiation processes (Additional file 8: Supplementary Fig. S2c), it is downregulated during epiblast-like cells (EpiLCs) differentiation and, remarkably, it is still detectable after this downregulation (Additional file 8: Supplementary Fig. S2d). We also found that vimentin expression is similar in ES cells and iPS cells and is higher in mouse embryonic fibroblasts (MEFs), which are the corresponding parental differentiated cells (Additional file 8: Supplementary Fig. S2e), agreeing with Boraas et al. [13]. Moreover, RNA-seq and proteomic data analyses revealed that vimentin expression is downregulated during the reprogramming process (Additional file 8: Supplementary Fig. S2f). Altogether, these data demonstrate that vimentin is expressed in pluripotent stem cells. We next analyzed through confocal imaging the distribution of vimentin in live ES cells transfected with a plasmid encoding GFP-vimentin and observed filaments close to or surrounding the cell nucleus (Fig. 3, Additional file 9: Supplementary Video S6 and Additional file 10: Supplementary Video S7). Relevantly, we observed a close association of GFP-vimentin with the nucleus even in those cells presenting relatively low expression levels of the fluorescent protein (Fig. 3a) suggesting that this association is not an aberrant distribution due to overexpression of the fusion protein. Notably, vimentin is organized in knots and ring-like structures around the nucleus in 33% and 37% of the studied cells, respectively (ncells = 70, Fig. 3b). These last structures evoke transient vimentin rings observed during both the initial stages of cell spreading and the detachment that precedes mitosis [56], and thus, we speculate that they might represent a frequent structure in cells with low spreading. 
Although the vimentin network cannot generate forces per se, those vimentin-containing rings described before may be involved in nuclear shaping [56], supporting their contribution to transmitting mechanical stimuli to the nucleus. Taken together, the close association between vimentin intermediate filaments and the nucleus of living ES cells suggests that these filaments are involved in mechanical communication to the nucleus. Vimentin intermediate filaments associate with the nucleus of interphase ES cells. Representative 3D images of ES cells expressing H2B-mCherry (red) and GFP-vimentin (green). A Yellow and white arrows point to vimentin knot and ring-like structures, respectively. B Images of the cells expressing relatively low (top panels) or intermediate (bottom panels) levels of GFP-vimentin showing a close association between vimentin structures and the nucleus (Additional file 9: Supplementary Video S6 and Additional file 10: Supplementary Video S7). These images were segmented as described in Methods (middle and right panels) to facilitate their observation (scale bars: 5 μm) Modulation of the nuclear shape of ES cells by the cytoskeletal networks In the previous section, we analyzed the 3D distribution of different cytoskeleton components and examined their organization in relation to the cell nucleus. Several works showed that internal and external forces may affect the nuclear volume and its morphology in a cell-type dependent manner (e.g., [57, 58]). In this sense, the morphology of the nucleus changes in mechanically stressed situations, for example during migration [59] and cell spreading [58]. We next asked if the cytoskeleton components studied in the previous section mechanically communicate with the ES cell nucleus. With this idea, we analyzed the nuclear morphology after disturbing each of these cytoskeletal networks.
We highlight that these experiments provide indirect, qualitative information regarding the involvement of different cytoskeletal filaments in mechanotransmission to the nucleus (as defined in [60]) but do not allow the quantification of the mechanical properties of the cytoskeleton of ES cells or of the forces applied to the nucleus. For these experiments, we used the YPet-OCT4 ES cell line previously generated by our group [61], which expresses the pluripotency TF OCT4 fused to the fluorescent protein YPet in a doxycycline-inducible manner. We have previously shown that this cell line preserves relevant properties of the parental cell line, including the morphology of the cells and colonies, a normal cell cycle, and the expression profile of pluripotency markers [61]; it was also observed that the YPet tag does not affect the subcellular localization of OCT4 [62]. Additionally, the YPet-OCT4 fusion protein is functional since it rescues pluripotency of inducible OCT4 knockout ES cells and presents genome-wide binding profiles similar to those of the endogenous TF [63]. The expression of YPet-OCT4 allows visualizing every nucleus in a colony (Fig. 4a). We segmented nuclei images and quantified their volume and sphericity; this last parameter approaches a value of one when the nucleus becomes more spherical and decreases as the nucleus is deformed.

The cytoskeletal networks regulate the nuclear morphology of ES cells. A Representative 3D image of a region of an YPet-OCT4 ES cell colony. The nuclei images were segmented to quantify the volume of each nucleus and its sphericity as described in the "Methods" section. B Quantification of the nuclei volume and sphericity in untreated YPet-OCT4 ES cells (CYPet-OCT4) or YPet-OCT4 ES cells incubated with latrunculin-B (lat), taxol (tax), or vinblastine (vbl).
C Similar quantifications performed in W4 ES cells only expressing H2B-mCherry (CW4) or co-expressing H2B-mCherry and the dominant negative vimentin mutant GFP-(vim(1-138)). The data is presented as median ± SE for each experimental condition (nCYPet-OCT4 = 165; nlat = 55; ntax = 58; nvbl = 146; nCW4 = 32; nvim(1-138) = 55). Please notice that the values measured in CYPet-OCT4 and CW4 conditions could differ due to the different emission spectra of the fluorescent protein used in each case. Asterisks indicate significant differences (p < 0.05) with respect to that obtained for the corresponding control cells. Raw data can be found in Additional file 17: Supplementary Table S2

Depolymerization of actin filaments by treating ES cells with latrunculin-B drastically altered the colony morphology (Additional file 11: Supplementary Fig. S3); ES cells rounded up and detached from each other, as expected from the involvement of actin in cell-cell and cell-substrate adhesions. The abrupt change in cell shape produced by latrunculin-B treatment was accompanied by an increase in the nuclear volume (Fig. 4b). This result could be explained considering that actin produces and/or transmits compressive forces to the nucleus, which relaxes after depolymerization of these filaments, leading to an increase in its volume. In line with this hypothesis, Kim et al. [57] proposed that actin, and also the microtubule network, compress the nucleus in MEFs. Nevertheless, we should emphasize that this volume change represents only a ~ 6% increment in the nucleus radius if we assume the volume to scale with the radius cubed. We also observed that the nuclei sphericity significantly increased after latrunculin-B treatment (Fig. 4b), suggesting a coupling between cell and nuclear morphologies in mouse ES cells. In line with this statement, previous observations in fibroblasts proposed a similar coupling, with round cells deriving in round nuclei and well-spread cells resulting in flat nuclei [58].
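As a rough numerical illustration of the two nuclear-shape descriptors used here, the sketch below computes Wadell sphericity (the standard definition, which we assume matches the descriptor reported by Imaris) and the radius increment implied by a volume change under the sphere approximation. The 20% volume increase in the example is a hypothetical value chosen only to show the scaling, not a measured one.

```python
import math

def sphericity(volume, surface_area):
    """Wadell sphericity: surface area of a sphere with the same volume,
    divided by the object's actual surface area (equals 1.0 for a sphere)."""
    return (math.pi ** (1 / 3)) * (6 * volume) ** (2 / 3) / surface_area

def radius_increment(volume_ratio):
    """Fractional radius change implied by a nuclear volume ratio,
    assuming the volume scales with the radius cubed (r ∝ V^(1/3))."""
    return volume_ratio ** (1 / 3) - 1.0

# sanity check with a sphere (radius in the same units as the 5 μm scale bars)
r = 5.0
V = 4 / 3 * math.pi * r ** 3
A = 4 * math.pi * r ** 2
print(round(sphericity(V, A), 3))              # → 1.0

# a hypothetical ~20% nuclear volume increase maps to only ~6% in radius
print(round(100 * radius_increment(1.20), 1))  # → 6.3
```

The cube-root dependence explains why a volume change that looks substantial corresponds to a modest change in the apparent nuclear radius.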
To study the impact of microtubules, we first depolymerized them using nocodazole, but ES cell colonies detached from the coverslip after the treatment (Additional file 11: Supplementary Fig. S3). Therefore, we followed an alternative procedure and only disturbed the microtubule network instead of depolymerizing it. First, we treated the cells with paclitaxel (also referred to as taxol) that promotes the assembly of these filaments slowing down their dynamical instability. This drug produces a reduction of microtubule stiffness in vitro [64, 65] and eliminates the nuclear wriggling produced by poking microtubules [32]. These previous reports led us to speculate that this drug may also produce a mechanical imbalance of the microtubule network of ES cells. We observed that the morphology of the colony was preserved after taxol treatment (Additional file 11: Supplementary Fig. S3) and neither the nuclear volume nor its sphericity significantly changed after this treatment (Fig. 4b). In addition, we analyzed the effects of vinblastine treatment; low concentrations of this drug stabilize microtubules by capping their plus-ends thus arresting their polymerization and depolymerization dynamics [66]. The morphology of the colony was also preserved after vinblastine treatment (Additional file 11: Supplementary Fig. S3) whereas nuclei slightly increased their volumes (~ 3 % increment in the nucleus radius) but did not significantly change their sphericity (Fig. 4b). Taken together, these results suggest that the microtubule network is not a key player in defining the nuclear shape of ES cells. Finally, we studied the morphology of the nucleus in the parental W4 ES cells transfected with H2B-mCherry and a dominant negative vimentin mutant (vim 1-138) fused to GFP; this fluorescently tagged mutant disrupts vimentin filaments [67]. 
We should highlight that, in contrast to the relatively fast drug treatments used to disrupt the microtubule and actin networks, the expression of the mutant vimentin requires a longer period of time. To our knowledge, there are no other methods to selectively disrupt this intermediate filament network. Thus, we cannot rule out that some of the effects observed in these experiments may be indirectly related to the vimentin network disruption. The morphology of the colony was also preserved after transfection of this truncated vimentin (Additional file 11: Supplementary Fig. S3). Figure 4c shows that the nuclear volume increased in those cells expressing the mutant vimentin and that the nucleus sphericity was significantly smaller in the transfected cells. These results suggest that, while unable to generate tension, the vimentin network plays a relevant role in protecting the nucleus against forces, as was observed in other cell systems [68]. Therefore, we could hypothesize that the intermediate filament network may modulate forces applied to the nucleus in ES cells and, consequently, may also influence gene expression.

Actin and intermediate filament networks modulate the dynamical organization of OCT4

We have previously used FCS to quantify the dynamics of TFs in the nucleus of living cells (e.g., [69, 70]). The application of this technique in ES cells revealed that OCT4-chromatin interactions weaken at the onset of differentiation [61] and uncovered how the histone acetyltransferase Kat6b modulates OCT4 and NANOG interactions with chromatin [71]. Therefore, we decided to use a similar approach to explore if the dynamical organization of OCT4 responds to alterations of those cytoskeleton networks that modulate the nuclear sphericity. Figure 5 shows mean, normalized autocorrelation functions (ACF) measured for YPet-OCT4 in control, vimentin-disrupted, or actin-depolymerized ES cells treated as described above.
In a previous work, we showed that ACF data of TFs in the cell nucleus follow Eq. 1, which is derived from a model that includes the diffusion of TF molecules in the nucleus and their interactions with chromatin targets in two distinct temporal windows [70]. The fitting of the experimental data with this equation suggests that OCT4 molecules engage in long- and short-lived interactions with characteristic times similar to those previously reported [61].

OCT4-chromatin interactions are modulated by the actin and vimentin networks. Single-point FCS measurements were run in YPet-OCT4 ES cells. Mean, normalized ACF obtained at the nucleoplasm of control (gray), vimentin-disrupted (A, green), and latrunculin-treated (B, orange) cells. C, D The ACF data were fitted with Eq. 1 to obtain the fractions of free (diffusing), long-lived bound, and short-lived bound TF molecules (C) and the characteristic times of long-lived and short-lived interactions of the TF with chromatin (D). The data is presented as mean ± SE for each experimental condition (control: gray bars, n = 17; vim(1-138): green bars, n = 17; latrunculin-B: orange bars, n = 12). Asterisks denote significant differences (p < 0.01) with respect to the control condition. Raw data can be found in Additional file 18: Supplementary Table S3

The analyses also indicate that disruption of either the actin or vimentin networks modified the dynamics of the pluripotency TF (Fig. 5c, d). Interestingly, these treatments affected OCT4 dynamics in opposite ways. Particularly, vimentin disruption promoted the detachment of OCT4 from long-lived chromatin targets with a parallel increase in the proportion of freely diffusing TF molecules. We mentioned before that OCT4-chromatin interactions weaken at the onset of differentiation [61].
In this context, we hypothesize that the intermediate filament network protects the nucleus from mechanical stimuli, thus contributing to the maintenance of OCT4-chromatin interactions and, therefore, of the pluripotent state. Conversely, actin disruption triggered the attachment of OCT4 molecules to long-lived sites. Therefore, we hypothesize that the actin network communicates mechanical signals to the nucleus, affecting the interactions of OCT4 with chromatin targets and probably modifying, in a longer time window, the gene expression profile leading to the exit from the pluripotent state. On the other hand, the arrest of microtubule polymerization/depolymerization through vinblastine treatment did not affect OCT4 dynamics (Additional file 12: Supplementary Fig. S4), supporting that the microtubule network neither modulates the morphology of the nucleus nor affects OCT4-chromatin interactions. Taken together, our results suggest that the actin and vimentin filament networks modulate the landscape of chromatin interactions of the pluripotency TF OCT4 and may ultimately impact on the preservation of the pluripotent state.

Mechanical forces regulate many aspects of cell function [3, 60]. One of the key components involved in intracellular mechanical communication is the cytoskeleton, an interconnected network of structurally different biopolymers and crosslinking molecules [72]. Forces applied to cells could also be transmitted to the nucleus, affecting a variety of nuclear properties and functions including chromatin organization and transcriptional regulation [73,74,75,76,77,78]. Extensive evidence shows that forces can define cell fate and guide developmental processes [6, 7]. The mechanical interplay between the cytoskeleton and the nucleus has been deeply studied in somatic cells; however, relevant aspects of this communication in ES cells remain elusive.
This void in the field is probably due to the fact that most studies describing the cytoskeleton relied on single-plane images of fixed specimens. In this work, we used non-invasive fluorescence microscopy methods to study the three-dimensional organization of the cytoskeleton in live ES cells and analyzed its influence on the nuclear morphology to explore if certain cytoskeletal components are involved in the transmission of mechanical signals to this organelle. We also analyzed if this communication impacts on the dynamical interactions of the pluripotency TF OCT4 with chromatin. Our imaging experiments in live ES cells revealed that, contrary to the observations in fixed specimens, microtubules present a complex organization extending throughout the cytoplasm. Time-lapse imaging of EB3-GFP comets highlights the dynamic behavior of the microtubule network and suggests the presence of MTOCs. Microtubules also localize in protrusions that extend toward other cells and may be involved in keeping cells together within the colony and/or in cell-cell communication [79]; further research is necessary to firmly assess their biological function. We also found that mechanical imbalances in the microtubule network caused by taxol or vinblastine did not significantly affect the nuclear sphericity. This result, combined with the observation through time-lapse imaging of EB3-GFP comets of microtubules growing in contact with and even poking the nucleus, suggests that these biopolymers are not involved in modulating its shape; further experiments are required to test their involvement in the rotation and positioning of the nucleus. Some previous works also described the relevance of microtubules for nuclear properties but, differently from our study in naïve ES cells, most of these works focus on differentiation processes of multipotent stem cells.
For example, differentiation of human adipose-derived stem cells requires a crosstalk between perinuclear microtubules and the LINC complex, and its disruption impairs adipogenesis [77]. Also, microtubules modulate the nucleus shape and affect heterochromatin distribution, impacting human hematopoietic stem cell differentiation. Invaginations generated by microtubules define the distinctive nuclear shape of myeloid progenitors, which seems to be relevant to establish the genetic program that identifies this specific cell lineage [76]. A recent report in ES cells [80] reveals microtubule-enriched cytoplasmic bridges that link sister cells for a long time after cell division and shows that the exit from naïve pluripotency requires the abscission of this bridge. We did not observe these bridges in our experiments, probably due to the combination of the relatively low proportion of transfected cells and the transient nature of these structures [80]. We also analyzed the 3D distribution of actin and found that it concentrates at cell-cell boundaries and cell-substrate contacts, as expected from its role in both cell-cell junctions and cell attachment to the substrate [34]. It was recently described that cortical actin in fixed ES cells is organized as an isotropic meshwork [43]. Although confocal microscopy does not allow resolving this meshwork, our observations provide information on other features of actin organization in living ES cells. Mouse ES cells did not present the typical actin stress fibers observed in many somatic cell lines nor the actin fence described in human ES cells [36], as expected from their different mechanical properties and interactions with the substrate. Particularly, human pluripotent stem cell colonies are bigger and flatter than those of mouse pluripotent stem cells, although in both cases the colonies are composed of tightly compact cells [38,39,40].
In addition, the composition of the substrate required for feeder-free culture of these cell types is different; mouse pluripotent stem cells can grow on gelatin-coated plates whereas human pluripotent stem cells require more complex coatings such as Matrigel or Geltrex [39, 40] with different mechanical properties [81, 82]. We also observed filopodia-like structures projecting from cells to their closest neighbors; these filopodia resemble those observed in early mouse embryos and required for compaction [42], suggesting that they might also be involved in keeping ES cells tightly together in the colony. Interestingly, ES cell nuclei increased their sphericity upon actin depolymerization, accompanying the loss of cell-cell junctions and the rounding up of cells. These results suggest that actin is involved in the mechanical coupling between cell and nuclear shapes, agreeing with the proposed role of these filaments in strain transmission to the nucleus of ES cells [83], mesenchymal stem cells [84], and endothelial cells [85], among other cell types [2, 86]. Finally, we explored the distribution of vimentin, an intermediate filament protein that has remained poorly explored in undifferentiated ES cells since it is expressed at relatively low levels. Importantly, our analyses of transcriptomic and proteomic data revealed that both vimentin mRNA and protein are detected in these cells. It is widely accepted that intermediate filaments, the softest component of the cytoskeleton [87], passively contribute to cell stiffness and protect the nucleus in mechanically stressed situations in somatic cells [68]. These filaments withstand significantly greater mechanical deformation than actin and microtubules [88], with an elastic modulus that increases at large strains [89], and form bundles of increased rigidity in cells [90, 91].
In contrast to microtubules and actin filaments, intermediate filaments do not constitute the tracks of molecular motors and cannot produce and/or respond to external forces by polymerization/depolymerization [87]. Interestingly, recent evidence points to more active roles of the vimentin intermediate filament network in the mechanical properties of somatic cells [90,91,92]. Vimentin has been extensively studied in many other systems due to its role in cell migration associated with both embryogenesis and cancer invasiveness [93, 94]; however, it has been poorly studied in the context of pluripotency. Vast evidence also highlights the involvement of vimentin in multiple differentiation processes since its expression increases during the epithelial-mesenchymal transition [95]. Additionally, it is downregulated during reprogramming to induced pluripotent stem cells [13]. Moreover, the absence of vimentin impairs spontaneous in vitro differentiation of ES cells to the endothelial phenotype [12]. A recent report also suggests that vimentin intervenes in the stress response of differentiating cells [96]. We found that vimentin concentrates around the nucleus and forms knots and ring-like structures in mouse ES cells that resemble those observed during processes involving loosely attached cells, i.e., during the initial steps of cell spreading and the detachment step that precedes mitosis [56]. Similar ring-like structures formed by intermediate filaments were proposed to cause nuclear invagination in diverse cell lines [97, 98]. Although the functional roles of vimentin structures associated with the nucleus in ES cells remain elusive, we hypothesize that they may interact with other active components of the cytoskeleton, as already observed in other cell lines [99,100,101], modulating the mechanical stimuli applied to the nucleus and consequently protecting it from mechanical stress.
In this line, we found that disruption of the vimentin network by expression of a dominant negative vimentin mutant increases nuclear deformation. Relevantly, this observation also supports the idea of a role for these intermediate filaments in the transmission of mechanical signals to the nucleus of ES cells. It is important to emphasize that we ran our assays with cells growing on coverslips, a condition widely used in the literature to explore a variety of properties of ES cells. However, many properties of stem cells in 2D and 3D are different [102,103,104], including the architecture of the cytoskeleton [105], and, even in 2D, the particular characteristics of the substrate influence the behavior of ES cells [8]. Thus, the observations performed in our experimental conditions cannot be directly extrapolated to other experimental conditions nor to the in vivo context of the embryo. Finally, we analyzed if those cytoskeleton components that modulate the nuclear shape also triggered changes in other properties of ES cells that may ultimately impact on gene expression and pluripotency maintenance. Specifically, we studied the dynamics of the pluripotency TF OCT4 through FCS, a technique that provides detailed information on TF organization both in single cells and in whole organisms [61, 69,70,71, 106]. Here, we showed that disruption of either the actin or vimentin networks impacts on the dynamical organization of OCT4, whereas the alteration of the microtubule network did not affect the dynamics of this pluripotency TF. Vimentin disruption induced the detachment of this TF from long-lived chromatin sites with a parallel increase in the relative amount of diffusing OCT4 molecules. In stark contrast, actin depolymerization triggered the binding of OCT4 to long-lived sites with a concomitant reduction of the proportion of TF molecules undergoing diffusion.
Altogether, these observations suggest that the cytoskeleton contributes to modulating the nuclear shape and also modifies the landscape of OCT4-chromatin interactions. We have previously reported that OCT4 detaches from chromatin sites at early stages of differentiation, preceding its downregulation [61]. In this context, we could hypothesize that the vimentin network protects the nucleus from deformations and contributes to the preservation of the pluripotent state of mouse ES cells. Thus, our results strongly suggest that vimentin may have a pro-stemness role in pluripotent stem cells. In line with this hypothesis, previous reports correlate high vimentin expression with restriction of differentiation during development and cancer [107,108,109,110,111]. Also, the reduction of vimentin levels at early stages of mammalian erythroid cell differentiation seems to be critical for enucleation [112], stressing the relevance of the nucleus-protecting function of vimentin. On the other hand, our results also highlight the role of actin in modulating the shape of the nucleus, which could indirectly guide differentiation. Particularly, we observed that actin depolymerization increased the sphericity of the nucleus and promoted OCT4 binding to chromatin, favoring the preservation of the pluripotent state. These results are in line with previous observations showing that weak interactions with the substrate and actin network disruption preserve ES cell pluripotency [83, 113]. In conclusion, our results provide new insights to dissect how the communication between the cytoskeleton and the nucleus of ES cells may impact on pluripotency maintenance and differentiation. In this work, we examined the 3D organization of the cytoskeleton of live naïve ES cells and showed how certain cytoskeletal components affect the nuclear shape.
We also found that those cytoskeletal components involved in shaping the nucleus (i.e., actin and vimentin intermediate filaments) also modulate the dynamical organization of the pluripotency transcription factor OCT4. Our data suggest that vimentin protects the nucleus and contributes to maintaining OCT4-chromatin interactions; thus, it may have a pro-stemness function in ES cells. On the other hand, actin seems to play the opposite role since it contributes to deforming the nucleus and triggers the detachment of OCT4 from chromatin sites. Taken together, our results support a relevant role of the cytoskeleton in communicating signals to the nucleus of ES cells, influencing the landscape of interactions of the transcription factor OCT4 with chromatin and most probably affecting pluripotency and cell fate.

Mouse ES cells were cultured in a medium composed of DMEM (Gibco), 2 mM Glutamax (Gibco), 100 mM MEM nonessential amino acids (Gibco), 0.1 mM 2-mercaptoethanol, 100 U/ml penicillin, and 100 mg/ml streptomycin (Gibco), supplemented with 15% FBS (Gibco), LIF, and 2i (1 μM PD0325901 and 3 μM CHIR99021, Tocris). The use of these inhibitors allows culturing ES cells while preserving naïve pluripotency [114]. Cells were maintained on 0.1% gelatin-coated dishes at 37 °C in a 5% CO2 (v/v) incubator, passaged every 3 days using trypsin (Gibco), and routinely assessed for mycoplasma contamination by genomic DNA extraction and PCR analysis. The experiments were performed using two cell lines: the mouse ES cell line W4, provided by the Rockefeller University Core Facility, and the YPet-OCT4 ES cell line, previously generated in our laboratory from the same W4 cell line [61]. The YPet-OCT4 cell line expresses the TF OCT4 fused to the fluorescent protein YPet in a doxycycline-inducible manner. Cells were incubated with 5 μg/ml doxycycline for 48 h prior to imaging experiments.
Plasmids and transfection

ES cells were plated for 24 h onto 18-mm round coverslips previously treated with 100 μg/ml PDL (Sigma-Aldrich) and 20 μg/ml Laminin (Invitrogen), which were placed into the wells of a 12-multiwell plate in 800 μl of complete medium. Transient transfection was carried out using Lipofectamine 2000 (Thermo Fisher) and 1.6 μg of plasmid DNA in Opti-MEM medium (Thermo Fisher). The transfection medium was replaced by fresh culture medium 6 h after transfection, and microscopy observations were performed 48 h after transfection. The plasmids were GFP-tagged full-length vimentin and the dominant-negative construct containing the head and alpha-helical domain 1A of vimentin [mCherry-vim(1-138)], generated from the GFP-vim(1-138) plasmid [67] that was provided by Dr. Vladimir I Gelfand (Northwestern University, Chicago, IL); EMTB-3xGFP [17], which encodes the microtubule-binding domain of ensconsin fused to a tandem of 3 copies of GFP (Addgene #26741), and EB3-GFP, which binds to the plus-end of growing microtubules [115], were gifts from Dr. Arpita Upadhyaya (University of Maryland, College Park, MD); PGK-H2B-mCherry was a gift from Mark Mercola (Addgene plasmid #21217; http://n2t.net/addgene:21217; RRID:Addgene_21217) [116] and pEGFP-actin [117] was kindly provided by Dr. Nicolás Plachta (Institute of Molecular and Cell Biology, A*STAR, Singapore).

Sample preparation for imaging

For microscopy measurements, ES cells were plated onto 18-mm round coverslips coated with PDL and laminin as described above. Before observation, the coverslips were mounted in a custom-made chamber specially designed for the microscope. Cells were incubated with 10 μM latrunculin-B (Sigma-Aldrich) at 37 °C for 15 min or with 10 μM nocodazole at 0 °C for 30 min to promote actin and microtubule depolymerization, respectively. To perturb microtubule dynamics, cells were incubated at 37 °C with 30 nM paclitaxel for 4 h or with 30 nM vinblastine sulfate (Sigma-Aldrich) for 10 min.
Confocal images were acquired in FV1000 Olympus confocal microscopes (Olympus Inc., Japan). GFP, EGFP, and YPet fusion proteins were excited with a multi-line Ar laser tuned to 488 nm, and mCherry with a 543 nm solid-state diode laser. The average power at the sample was ~ 1 μW. The laser light was reflected by a dichroic mirror (DM 405/488/543/635) and focused through an Olympus UPlanSApo 60X oil immersion objective (NA = 1.35) onto the sample. Fluorescence was collected by the same objective and split into two channels set to collect photons in the ranges 500–525 nm (GFP, EGFP, and YPet) and 650–750 nm (mCherry). Fluorescence was detected with photomultipliers set in the photon-counting detection mode.

Tracking of EB3 comets

We used the TrackMate plugin [118] of Fiji ImageJ (NIH, USA) to track EB3 comets; the image stacks were preprocessed using the despeckle filter of the same program. These data were exported to Icy [119] to obtain the flow map.

3D image analyses

Z-stack images were preprocessed using median and ROF filters in ImageJ (NIH, USA) and analyzed using the automatic surface rendering mode of the software Imaris (Bitplane), which was also used to calculate the morphological descriptors sphericity and volume of cell nuclei. Examples of nuclei segmentation can be found in Additional file 13: Supplementary Fig. S5. Single-point FCS measurements were performed in the Olympus FV1000 confocal microscope set in the photon-counting mode. The laser was focused at a position in a cell nucleus selected by the user and the intensity was collected at 50 MHz during ~ 3 min. A single experiment was performed in each cell to minimize photodamage. ACF data were calculated using the SimFCS program (LFD, Irvine, CA, USA) and were fitted with Eq.
1 that considers the diffusion of the TFs and their binding to two populations of fixed sites [70]:

$$ G(\tau)=\frac{1}{2^{3/2}N}\left[f_{\mathrm{D}}\left(1+\frac{\tau}{\tau_{\mathrm{D}}}\right)^{-1}\left(1+\frac{\tau}{\omega^{2}\tau_{\mathrm{D}}}\right)^{-1/2}+f_{\mathrm{short}}\,e^{-\tau/\tau_{\mathrm{short}}}+f_{\mathrm{long}}\,e^{-\tau/\tau_{\mathrm{long}}}\right] $$

where N is the mean number of fluorescent molecules in the confocal volume, τD is the characteristic diffusion time, ω is the ratio between the axial and radial waists of the observation volume, and fD is the freely diffusing population fraction. fshort and flong are the population fractions bound to short-lived and long-lived targets, and τshort and τlong are their residence times, respectively. The reciprocal of the residence time corresponds to the dissociation rate constant koff.

Bioinformatics analysis

Vimentin gene expression analysis was performed on the transcriptomic and proteomic data-mining platform Stemformatics (https://www.stemformatics.org, [120]), using publicly available datasets (Additional file 7: Supplementary Table S1, [48,49,50,51,52,53,54,55]) stored in Gene Expression Omnibus (GEO, NCBI), Sequence Read Archive (NCBI), GnomEx (Utah), and ProteomeXchange. Data normalization, transformation, and annotation methods are available in the Stemformatics documentation (https://www.stemformatics.org/Stemformatics_data_methods.pdf).

All the results shown in this work were obtained from experiments replicated at least 3 times. Nuclear volume and sphericity were expressed as median ± SE.
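To illustrate how Eq. 1 is evaluated, the sketch below implements the model with hypothetical parameter values; in practice the fractions and characteristic times are recovered by nonlinear least-squares fitting of the measured ACF (here done with SimFCS; a generic fitter such as scipy.optimize.curve_fit would play the same role).

```python
import numpy as np

def acf_model(tau, N, f_D, tau_D, f_short, tau_short, f_long, tau_long, omega=5.0):
    """Eq. 1: FCS autocorrelation for a diffusing TF population plus
    short- and long-lived chromatin-bound fractions (f_D + f_short + f_long = 1)."""
    diffusion = f_D / ((1 + tau / tau_D) * np.sqrt(1 + tau / (omega**2 * tau_D)))
    bound = f_short * np.exp(-tau / tau_short) + f_long * np.exp(-tau / tau_long)
    return (diffusion + bound) / (2**1.5 * N)

# hypothetical parameters (times in seconds); not the fitted values of the paper
tau = np.logspace(-4, 1, 200)  # lag times from 0.1 ms to 10 s
g = acf_model(tau, N=10, f_D=0.5, tau_D=1e-3,
              f_short=0.3, tau_short=0.05, f_long=0.2, tau_long=1.0)

# G(τ) decays monotonically from its small-τ amplitude toward zero
print(round(float(g[0]), 4))  # → 0.0337
```

The amplitude scales as 1/N, and the two exponential terms shift the decay to longer lag times as the bound fractions or residence times grow, which is how the fit separates freely diffusing from chromatin-bound molecules.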
To compare the median values (med) of different data sets, we used a hypothesis test computing the p-values as follows [121]:

$$ p\text{-value}=2\left[1-F\left(\frac{\left|\mathrm{med}_{(\mathrm{g}1)}-\mathrm{med}_{(\mathrm{g}2)}\right|}{\sqrt{\sigma^{2}_{(\mathrm{g}1)}+\sigma^{2}_{(\mathrm{g}2)}}}\right)\right] $$

where F is the standard normal distribution and σ²(g1) and σ²(g2) represent the variance of each data group. Differences were regarded as significant at p < 0.05. The parameters' standard errors (SE) and variances were computed by a bootstrap procedure [122]. Experimental results obtained for OCT4 dynamics were expressed as mean ± SEM. Statistical significance between groups was analyzed using linear mixed models (LMM) followed by comparisons between means using the Dunnett test, when required. Differences were regarded as significant at p ≤ 0.01. Statistical data analysis was performed using the R software.

All data generated or analyzed during this study are included in this published article, its supplementary information files, and publicly available repositories. 3D images of ES cells are available in Figshare:

EB3-GFP and H2B-mCherry: https://figshare.com/s/4df2870709faf1618ae8 [123].
EMTB-3GFP and H2B-mCherry: https://figshare.com/s/828fcb4d80ddead23574 [124].
Actin-GFP and H2B-mCherry: https://figshare.com/s/2dfcf066f4a10839ad47 [125].
Vim-GFP and H2B-mCherry: https://figshare.com/s/74edbb5a93ae74caa9e8 [126].
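The median-comparison test and the bootstrap variance estimate described above can be sketched as follows. This is an illustrative implementation only; the sample values and group names are made up, not data from this study.

```python
import math
import random
import statistics

def bootstrap_var_of_median(data, n_boot=2000, seed=0):
    """Bootstrap estimate of the variance of the sample median [122]."""
    rng = random.Random(seed)
    meds = [statistics.median(rng.choices(data, k=len(data)))
            for _ in range(n_boot)]
    return statistics.pvariance(meds)

def median_pvalue(g1, g2):
    """Two-sided p-value for the difference of group medians using the
    normal approximation of the equation above (F = standard normal CDF)."""
    z = abs(statistics.median(g1) - statistics.median(g2)) / math.sqrt(
        bootstrap_var_of_median(g1) + bootstrap_var_of_median(g2))
    F = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
    return 2.0 * (1.0 - F)

# hypothetical sphericity-like samples for two conditions
rng = random.Random(42)
control = [rng.gauss(0.85, 0.05) for _ in range(60)]
treated = [rng.gauss(0.70, 0.05) for _ in range(60)]
p = median_pvalue(control, treated)
# differences are regarded as significant at p < 0.05
```

Using bootstrap variances of the medians (rather than the raw sample variances) in the denominator matches the stated procedure of estimating parameter errors by bootstrap.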
ACF: Autocorrelation functions
EB3: Microtubule end-binding protein
EGFP: Enhanced green fluorescent protein
EMTB: Ensconsin microtubule-binding domain
EpiLCs: Epiblast-like cells
ES cells: Embryonic stem cells
FCS: Fluorescence correlation spectroscopy
GFP: Green fluorescent protein
iPS cells: Induced pluripotent stem cells
lat: Latrunculin-B
LINC: Linker of nucleoskeleton and cytoskeleton
mCherry: Monomeric red fluorescent protein
MEFs: Mouse embryonic fibroblasts
MTOCs: Microtubule-organizing centers
RNA-seq: RNA sequencing
tax: Taxol
TF: Transcription factor
vim(1-138): Dominant negative vimentin mutant
vbl: Vinblastine
YPet: Yellow fluorescent protein for energy transfer

References

Starr DA, Fridolfsson HN. Interactions between nuclei and the cytoskeleton are mediated by SUN-KASH nuclear-envelope bridges. Annu Rev Cell Dev Biol. 2010;26(1):421–44. https://doi.org/10.1146/annurev-cellbio-100109-104037.
Fedorchak GR, Kaminski A, Lammerding J. Cellular mechanosensing: getting to the nucleus of it all. Prog Biophys Mol Biol. 2014;115(2-3):76–92. https://doi.org/10.1016/j.pbiomolbio.2014.06.009.
Martino F, Perestrelo AR, Vinarsky V, Pagliari S, Forte G. Cellular mechanotransduction: from tension to function. Front Physiol. 2018;9:824. https://doi.org/10.3389/fphys.2018.00824.
Gundersen GG, Worman HJ. Nuclear positioning. Cell. 2013;152(6):1376–89. https://doi.org/10.1016/j.cell.2013.02.031.
Miroshnikova YA, Nava MM, Wickstrom SA. Emerging roles of mechanical forces in chromatin regulation. J Cell Sci. 2017;130(14):2243–50. https://doi.org/10.1242/jcs.202192.
Vining KH, Mooney DJ. Mechanical forces direct stem cell behaviour in development and regeneration. Nat Rev Mol Cell Biol. 2017;18(12):728–42. https://doi.org/10.1038/nrm.2017.108.
Heo SJ, Cosgrove BD, Dai EN, Mauck RL. Mechano-adaptation of the stem cell nucleus. Nucleus. 2018;9(1):9–19. https://doi.org/10.1080/19491034.2017.1371398.
Engler AJ, Sen S, Sweeney HL, Discher DE. Matrix elasticity directs stem cell lineage specification. Cell. 2006;126(4):677–89. https://doi.org/10.1016/j.cell.2006.06.044.
Murphy WL, McDevitt TC, Engler AJ. Materials as stem cell regulators. Nat Mater.
2014;13(6):547–57. https://doi.org/10.1038/nmat3937. Chen L, Hu H, Qiu W, Shi K, Kassem M. Actin depolymerization enhances adipogenic differentiation in human stromal stem cells. Stem Cell Res. 2018;29:76–83. https://doi.org/10.1016/j.scr.2018.03.010. Boraas LC, Pineda ET, Ahsan T. Actin and myosin II modulate differentiation of pluripotent stem cells. PLoS One. 2018;13(4):e0195588. https://doi.org/10.1371/journal.pone.0195588. Boraas LC, Ahsan T. Lack of vimentin impairs endothelial differentiation of embryonic stem cells. Sci Rep. 2016;6(1):30814. https://doi.org/10.1038/srep30814. Boraas LC, Guidry JB, Pineda ET, Ahsan T. Cytoskeletal expression and remodeling in pluripotent stem cells. PLoS One. 2016;11(1):e0145084. https://doi.org/10.1371/journal.pone.0145084. Li Y, Almassalha LM, Chandler JE, Zhou X, Stypula-Cyrus YE, Hujsak KA, et al. The effects of chemical fixation on the cellular nanostructure. Exp Cell Res. 2017;358(2):253–9. https://doi.org/10.1016/j.yexcr.2017.06.022. Danchenko M, Csaderova L, Fournier PE, Sekeyova Z. Optimized fixation of actin filaments for improved indirect immunofluorescence staining of rickettsiae. BMC Res Notes. 2019;12(1):657. https://doi.org/10.1186/s13104-019-4699-9. Loh Y-H, Wu Q, Chew J-L, Vega VB, Zhang W, Chen X, et al. The Oct4 and Nanog transcription network regulates pluripotency in mouse embryonic stem cells. Nat Genet. 2006;38(4):431–40. https://doi.org/10.1038/ng1760. Faire K, Waterman-Storer CM, Gruber D, Masson D, Salmon ED, Bulinski JC. E-MAP-115 (ensconsin) associates dynamically with microtubules in vivo and is not a physiological modulator of microtubule dynamics. J Cell Sci. 1999;112(Pt 23):4243–55. https://doi.org/10.1242/jcs.112.23.4243. Pallavicini C, Levi V, Wetzler DE, Angiolini JF, Bensenor L, Desposito MA, et al. Lateral motion and bending of microtubules studied with a new single-filament tracking routine in living cells. Biophys J. 2014;106(12):2625–35. https://doi.org/10.1016/j.bpj.2014.04.046. 
Zenker J, White MD, Templin RM, Parton RG, Thorn-Seshold O, Bissiere S, et al. A microtubule-organizing center directing intracellular transport in the early mouse embryo. Science. 2017;357(6354):925–8. https://doi.org/10.1126/science.aam9335. Talwar S, Kumar A, Rao M, Menon GI, Shivashankar GV. Correlated spatio-temporal fluctuations in chromatin compaction states characterize stem cells. Biophys J. 2013;104(3):553–64. https://doi.org/10.1016/j.bpj.2012.12.033. Alberts B, Johnson A, Lewis J, Raff M, Roberts K, Walter P. The cytoskeleton. In: Molecular biology of the cell. 4th ed. New York: Garland Science; 2002. Baharvand H, Matthaei KI. The ultrastructure of mouse embryonic stem cells. Reprod Biomed Online. 2003;7(3):330–5. https://doi.org/10.1016/S1472-6483(10)61873-1. Zwetsloot AJ, Tut G, Straube A. Measuring microtubule dynamics. Essays Biochem. 2018;62(6):725–35. https://doi.org/10.1042/EBC20180035. Mustyatsa VV, Kostarev AV, Tvorogova AV, Ataullakhanov FI, Gudimchuk NB, Vorobjev IA. Fine structure and dynamics of EB3 binding zones on microtubules in fibroblast cells. Mol Biol Cell. 2019;30(17):2105–14. https://doi.org/10.1091/mbc.E18-11-0723. Gueth-Hallonet C, Antony C, Aghion J, Santa-Maria A, Lajoie-Mazenc I, Wright M, et al. gamma-Tubulin is present in acentriolar MTOCs during early mouse development. J Cell Sci. 1993;105(Pt 1):157–66. https://doi.org/10.1242/jcs.105.1.157. Sanchez AD, Feldman JL. Microtubule-organizing centers: from the centrosome to non-centrosomal sites. Curr Opin Cell Biol. 2017;44:93–101. https://doi.org/10.1016/j.ceb.2016.09.003. Manandhar G, Schatten H, Sutovsky P. Centrosome reduction during gametogenesis and its significance. Biol Reprod. 2005;72(1):2–13. https://doi.org/10.1095/biolreprod.104.031245. Schuh M, Ellenberg J. Self-organization of MTOCs replaces centrosome function during acentrosomal spindle assembly in live mouse oocytes. Cell. 2007;130(3):484–98. https://doi.org/10.1016/j.cell.2007.06.025. 
Dupin I, Etienne-Manneville S. Nuclear positioning: mechanisms and functions. Int J Biochem Cell Biol. 2011;43(12):1698–707. https://doi.org/10.1016/j.biocel.2011.09.004. Tolic-Norrelykke IM. Push-me-pull-you: how microtubules organize the cell interior. Eur Biophys J. 2008;37(7):1271–8. https://doi.org/10.1007/s00249-008-0321-0. Reinsch S, Gonczy P. Mechanisms of nuclear positioning. J Cell Sci. 1998;111(Pt 16):2283–95. https://doi.org/10.1242/jcs.111.16.2283. Szikora S, Gaspar I, Szabad J. 'Poking' microtubules bring about nuclear wriggling to position nuclei. J Cell Sci. 2013;126(Pt 1):254–62. https://doi.org/10.1242/jcs.114355. Svitkina T. The actin cytoskeleton and actin-based motility. Cold Spring Harb Perspect Biol. 2018;10(1):a018267. https://doi.org/10.1101/cshperspect.a018267. Bachir AI, Horwitz AR, Nelson WJ, Bianchini JM. Actin-based adhesion modules mediate cell interactions with the extracellular matrix and neighboring cells. Cold Spring Harb Perspect Biol. 2017;9(7):a023234. https://doi.org/10.1101/cshperspect.a023234. Rosowski KA, Mertz AF, Norcross S, Dufresne ER, Horsley V. Edges of human embryonic stem cell colonies display distinct mechanical properties and differentiation potential. Sci Rep. 2015;5(1):14218. https://doi.org/10.1038/srep14218. Narva E, Stubb A, Guzman C, Blomqvist M, Balboa D, Lerche M, et al. A strong contractile actin fence and large adhesions direct human pluripotent colony morphology and adhesion. Stem Cell Rep. 2017;9(1):67–76. https://doi.org/10.1016/j.stemcr.2017.05.021. Chowdhury F, Li Y, Poh YC, Yokohama-Tamaki T, Wang N, Tanaka TS. Soft substrates promote homogeneous self-renewal of embryonic stem cells via downregulating cell-matrix tractions. PLoS One. 2010;5(12):e15655. https://doi.org/10.1371/journal.pone.0015655. Ginis I, Luo Y, Miura T, Thies S, Brandenberger R, Gerecht-Nir S, et al. Differences between human and mouse embryonic stem cells. Dev Biol. 2004;269(2):360–80. 
https://doi.org/10.1016/j.ydbio.2003.12.034. Lanza R, Atala A. Handbook of stem cells. 2nd ed; 2013. Lin S, Talbot P. Methods for culturing mouse and human embryonic stem cells. Methods Mol Biol. 2011;690:31–56. https://doi.org/10.1007/978-1-60761-962-8_2. Li L, Bennett SA, Wang L. Role of E-cadherin and other cell adhesion molecules in survival and differentiation of human pluripotent stem cells. Cell Adhes Migr. 2012;6(1):59–70. https://doi.org/10.4161/cam.19583. Fierro-Gonzalez JC, White MD, Silva JC, Plachta N. Cadherin-dependent filopodia control preimplantation embryo compaction. Nat Cell Biol. 2013;15(12):1424–33. https://doi.org/10.1038/ncb2875. Xia S, Lim YB, Zhang Z, Wang Y, Zhang S, Lim CT, et al. Nanoscale architecture of the cortical actin cytoskeleton in embryonic stem cells. Cell Rep. 2019;28(5):1251–67.e1257. https://doi.org/10.1016/j.celrep.2019.06.089. Paluch E, Piel M, Prost J, Bornens M, Sykes C. Cortical actomyosin breakage triggers shape oscillations in cells and cell fragments. Biophys J. 2005;89(1):724–33. https://doi.org/10.1529/biophysj.105.060590. De Belly H, Stubb A, Yanagida A, et al. Membrane Tension Gates ERK-Mediated Regulation of Pluripotent Cell Fate. Cell Stem Cell. 2021;28(2):273-284.e6. https://doi.org/10.1016/j.stem.2020.10.018. Bergert M, Lembo S, Sharma S, et al. Cell Surface Mechanics Gate Embryonic Stem Cell Differentiation. Cell Stem Cell. 2021;28(2):209-216.e4. https://doi.org/10.1016/j.stem.2020.10.017. Etienne-Manneville S. Cytoplasmic intermediate filaments in cell biology. Annu Rev Cell Dev Biol. 2018;34(1):1–28. https://doi.org/10.1146/annurev-cellbio-100617-062534. Benevento M, Tonge PD, Puri MC, Hussein SM, Cloonan N, Wood DL, et al. Proteome adaptation in cell reprogramming proceeds via distinct transcriptional networks. Nat Commun. 2014;5(1):5613. https://doi.org/10.1038/ncomms6613. Hayashi K, Ohta H, Kurimoto K, Aramaki S, Saitou M. 
Reconstitution of the mouse germ cell specification pathway in culture by pluripotent stem cells. Cell. 2011;146(4):519–32. https://doi.org/10.1016/j.cell.2011.06.052. Hussein SM, Puri MC, Tonge PD, Benevento M, Corso AJ, Clancy JL, et al. Genome-wide characterization of the routes to pluripotency. Nature. 2014;516(7530):198–206. https://doi.org/10.1038/nature14046. Jameson SA, Natarajan A, Cool J, DeFalco T, Maatouk DM, Mork L, et al. Temporal transcriptional profiling of somatic and germ cells reveals biased lineage priming of sexual fate in the fetal mouse gonad. PLoS Genet. 2012;8(3):e1002575. https://doi.org/10.1371/journal.pgen.1002575. Kojima Y, Kaufman-Francis K, Studdert JB, Steiner KA, Power MD, Loebel DA, et al. The transcriptional and functional properties of mouse epiblast stem cells resemble the anterior primitive streak. Cell Stem Cell. 2014;14(1):107–20. https://doi.org/10.1016/j.stem.2013.09.014. Lodato MA, Ng CW, Wamstad JA, Cheng AW, Thai KK, Fraenkel E, et al. SOX2 co-occupies distal enhancer elements with distinct POU factors in ESCs and NPCs to specify cell state. PLoS Genet. 2013;9(2):e1003288. https://doi.org/10.1371/journal.pgen.1003288. Ulloa-Montoya F, Kidder BL, Pauwelyn KA, Chase LG, Luttun A, Crabbe A, et al. Comparative transcriptome analysis of embryonic and adult stem cells with extended and limited differentiation capacity. Genome Biol. 2007;8(8):R163. https://doi.org/10.1186/gb-2007-8-8-r163. Wamstad JA, Alexander JM, Truty RM, Shrikumar A, Li F, Eilertson KE, et al. Dynamic and coordinated epigenetic regulation of developmental transitions in the cardiac lineage. Cell. 2012;151(1):206–20. https://doi.org/10.1016/j.cell.2012.07.035. Terriac E, Schutz S, Lautenschlager F. Vimentin intermediate filament rings deform the nucleus during the first steps of adhesion. Front Cell Dev Biol. 2019;7:106. https://doi.org/10.3389/fcell.2019.00106. Kim DH, Li B, Si F, Phillip JM, Wirtz D, Sun SX. 
Volume regulation and shape bifurcation in the cell nucleus. J Cell Sci. 2015;128(18):3375–85. https://doi.org/10.1242/jcs.166330. Li Y, Lovett D, Zhang Q, Neelam S, Kuchibhotla RA, Zhu R, et al. Moving cell boundaries drive nuclear shaping during cell spreading. Biophys J. 2015;109(4):670–86. https://doi.org/10.1016/j.bpj.2015.07.006. Katiyar A, Tocco VJ, Li Y, Aggarwal V, Tamashunas AC, Dickinson RB, et al. Nuclear size changes caused by local motion of cell boundaries unfold the nuclear lamina and dilate chromatin and intranuclear bodies. Soft Matter. 2019;15(45):9310–7. https://doi.org/10.1039/C9SM01666J. Maurer M, Lammerding J. The driving force: nuclear mechanotransduction in cellular function, fate, and disease. Annu Rev Biomed Eng. 2019;21(1):443–68. https://doi.org/10.1146/annurev-bioeng-060418-052139. Verneri P, Vazquez Echegaray C, Oses C, Stortz M, Guberman A, Levi V. Dynamical reorganization of the pluripotency transcription factors Oct4 and Sox2 during early differentiation of embryonic stem cells. Sci Rep. 2020;10(1):5195. https://doi.org/10.1038/s41598-020-62235-0. Deluz C, Friman ET, Strebinger D, Benke A, Raccaud M, Callegari A, et al. A role for mitotic bookmarking of SOX2 in pluripotency and differentiation. Genes Dev. 2016;30(22):2538–50. https://doi.org/10.1101/gad.289256.116. Strebinger D, Deluz C, Friman ET, Govindan S, Alber AB, Suter DM. Endogenous fluctuations of OCT4 and SOX2 bias pluripotent cell fate decisions. Mol Syst Biol. 2019;15(9):e9002. https://doi.org/10.15252/msb.20199002. Felgner H, Frank R, Schliwa M. Flexural rigidity of microtubules measured with the use of optical tweezers. J Cell Sci. 1996;109(Pt 2):509–16. https://doi.org/10.1242/jcs.109.2.509. Kikumoto M, Kurachi M, Tosa V, Tashiro H. Flexural rigidity of individual microtubules measured by a buckling force with optical traps. Biophys J. 2006;90(5):1687–96. https://doi.org/10.1529/biophysj.104.055483. Dhamodharan R, Jordan MA, Thrower D, Wilson L, Wadsworth P. 
Vinblastine suppresses dynamics of individual microtubules in living interphase cells. Mol Biol Cell. 1995;6(9):1215–29. https://doi.org/10.1091/mbc.6.9.1215. Chang L, Barlan K, Chou YH, Grin B, Lakonishok M, Serpinskaya AS, et al. The dynamic properties of intermediate filaments during organelle transport. J Cell Sci. 2009;122(Pt 16):2914–23. https://doi.org/10.1242/jcs.046789. Patteson AE, Vahabikashi A, Pogoda K, Adam SA, Mandal K, Kittisopikul M, et al. Vimentin protects cells against nuclear rupture and DNA damage during migration. J Cell Biol. 2019;218(12):4079–92. https://doi.org/10.1083/jcb.201902046. Stortz M, Presman DM, Bruno L, Annibale P, Dansey MV, Burton G, et al. Mapping the dynamics of the glucocorticoid receptor within the nuclear landscape. Sci Rep. 2017;7(1):6219. https://doi.org/10.1038/s41598-017-06676-0. White MD, Angiolini JF, Alvarez YD, Kaur G, Zhao ZW, Mocskos E, et al. Long-lived binding of Sox2 to DNA predicts cell fate in the four-cell mouse embryo. Cell. 2016;165(1):75–87. https://doi.org/10.1016/j.cell.2016.02.032. Cosentino MS, Oses C, Vázquez Echegaray C, Solari C, Waisman A, Álvarez Y, et al. Kat6b modulates Oct4 and Nanog binding to chromatin in embryonic stem cells and is required for efficient neural differentiation. J Mol Biol. 2019;431(6):1148–59. https://doi.org/10.1016/j.jmb.2019.02.012. Fletcher DA, Mullins RD. Cell mechanics and the cytoskeleton. Nature. 2010;463(7280):485–92. https://doi.org/10.1038/nature08908. Keeling MC, Flores LR, Dodhy AH, Murray ER, Gavara N. Actomyosin and vimentin cytoskeletal networks regulate nuclear shape, mechanics and chromatin organization. Sci Rep. 2017;7(1):5219. https://doi.org/10.1038/s41598-017-05467-x. Pongkitwitoon S, Uzer G, Rubin J, Judex S. Cytoskeletal configuration modulates mechanically induced changes in mesenchymal stem cell osteogenesis, morphology, and stiffness. Sci Rep. 2016;6(1):34791. https://doi.org/10.1038/srep34791. 
McBeath R, Pirone DM, Nelson CM, Bhadriraju K, Chen CS. Cell shape, cytoskeletal tension, and RhoA regulate stem cell lineage commitment. Dev Cell. 2004;6(4):483–95. https://doi.org/10.1016/S1534-5807(04)00075-9. Biedzinski S, Agsu G, Vianay B, et al. Microtubules control nuclear shape and gene expression during early stages of hematopoietic differentiation. EMBO J. 2020;39(23):e103957. https://doi.org/10.15252/embj.2019103957. Yang Y, Qu R, Fan T, Zhu X, Feng Y, Yang Y, et al. Cross-talk between microtubules and the linker of nucleoskeleton complex plays a critical role in the adipogenesis of human adipose-derived stem cells. Stem Cell Res Ther. 2018;9(1):125. https://doi.org/10.1186/s13287-018-0836-y. Iyer KV, Pulford S, Mogilner A, Shivashankar GV. Mechanical activation of cells induces chromatin remodeling preceding MKL nuclear transport. Biophys J. 2012;103(7):1416–28. https://doi.org/10.1016/j.bpj.2012.08.041. Buszczak M, Inaba M, Yamashita YM. Signaling by cellular protrusions: keeping the conversation private. Trends Cell Biol. 2016;26(7):526–34. https://doi.org/10.1016/j.tcb.2016.03.003. Chaigne A, Labouesse C, White IJ, Agnew M, Hannezo E, Chalut KJ, et al. Abscission couples cell division to embryonic stem cell fate. Dev Cell. 2020;55(2):195–208.e195. https://doi.org/10.1016/j.devcel.2020.09.001. Caliari SR, Burdick JA. A practical guide to hydrogels for cell culture. Nat Methods. 2016;13(5):405–14. https://doi.org/10.1038/nmeth.3839. Soofi SS, Last JA, Liliensiek SJ, Nealey PF, Murphy CJ. The elastic modulus of matrigel as determined by atomic force microscopy. J Struct Biol. 2009;167(3):216–9. https://doi.org/10.1016/j.jsb.2009.05.005. David BG, Fujita H, Yasuda K, Okamoto K, Panina Y, Ichinose J, et al. Linking substrate and nucleus via actin cytoskeleton in pluripotency maintenance of mouse embryonic stem cells. Stem Cell Res. 2019;41:101614. https://doi.org/10.1016/j.scr.2019.101614. 
Vishavkarma R, Raghavan S, Kuyyamudi C, Majumder A, Dhawan J, Pullarkat PA. Role of actin filaments in correlating nuclear shape and cell spreading. PLoS One. 2014;9(9):e107895. https://doi.org/10.1371/journal.pone.0107895. Versaevel M, Grevesse T, Gabriele S. Spatial coordination between cell and nuclear shape within micropatterned endothelial cells. Nat Commun. 2012;3(1):671. https://doi.org/10.1038/ncomms1668. Uhler C, Shivashankar GV. Regulation of genome organization and gene expression by nuclear mechanotransduction. Nat Rev Mol Cell Biol. 2017;18(12):717–27. https://doi.org/10.1038/nrm.2017.101. Howard J. Mechanics of motor proteins and the cytoskeleton. Sunderland, MA: Sinauer Associates; 2001. Janmey PA, Euteneuer U, Traub P, Schliwa M. Viscoelastic properties of vimentin compared with other filamentous biopolymer networks. J Cell Biol. 1991;113(1):155–60. https://doi.org/10.1083/jcb.113.1.155. Lin YC, Yao NY, Broedersz CP, Herrmann H, Mackintosh FC, Weitz DA. Origins of elasticity in intermediate filament networks. Phys Rev Lett. 2010;104(5):058101. https://doi.org/10.1103/PhysRevLett.104.058101. Costigliola N, Ding L, Burckhardt CJ, Han SJ, Gutierrez E, Mota A, et al. Vimentin fibers orient traction stress. Proc Natl Acad Sci U S A. 2017;114(20):5195–200. https://doi.org/10.1073/pnas.1614610114. Smoler M, Coceano G, Testa I, Bruno L, Levi V. Apparent stiffness of vimentin intermediate filaments in living cells and its relation with other cytoskeletal polymers. Biochim Biophys Acta Mol Cell Res. 2020;1867(8):118726. https://doi.org/10.1016/j.bbamcr.2020.118726. Gan Z, Ding L, Burckhardt CJ, Lowery J, Zaritsky A, Sitterley K, et al. Vimentin intermediate filaments template microtubule networks to enhance persistence in cell polarity and directed migration. Cell Syst. 2016;3(3):252–63.e258. https://doi.org/10.1016/j.cels.2016.08.007. Strouhalova K, Prechova M, Gandalovicova A, Brabek J, Gregor M, Rosel D.
Vimentin intermediate filaments as potential target for cancer treatment. Cancers (Basel). 2020;12(1):184. https://doi.org/10.3390/cancers12010184. Battaglia RA, Delic S, Herrmann H, Snider NT. Vimentin on the move: new developments in cell migration. F1000Res. 2018;7:F1000 Faculty Rev-1796. https://doi.org/10.12688/f1000research.15967.1. Kalluri R, Weinberg RA. The basics of epithelial-mesenchymal transition. J Clin Invest. 2009;119(6):1420–8. https://doi.org/10.1172/JCI39104. Pattabiraman S, Azad GK, Amen T, Brielle S, Park JE, Sze SK, et al. Vimentin protects differentiating stem cells from stress. Sci Rep. 2020;10(1):19525. https://doi.org/10.1038/s41598-020-76076-4. Kamei H. Relationship of nuclear invaginations to perinuclear rings composed of intermediate filaments in MIA PaCa-2 and some other cells. Cell Struct Funct. 1994;19(3):123–32. https://doi.org/10.1247/csf.19.123. Feliksiak K, Witko T, Solarz D, Guzik M, Rajfur Z. Vimentin association with nuclear grooves in normal MEF 3T3 cells. Int J Mol Sci. 2020;21(20):7478. Huber F, Boire A, Lopez MP, Koenderink GH. Cytoskeletal crosstalk: when three different personalities team up. Curr Opin Cell Biol. 2015;32:39–47. https://doi.org/10.1016/j.ceb.2014.10.005. Liu CY, Lin HH, Tang MJ, Wang YK. Vimentin contributes to epithelial-mesenchymal transition cancer cell mechanics by mediating cytoskeletal organization and focal adhesion maturation. Oncotarget. 2015;6(18):15966–83. https://doi.org/10.18632/oncotarget.3862. Fan T, Qu R, Jiang X, Yang Y, Sun B, Huang X, et al. Spatial organization and crosstalk of vimentin and actin stress fibers regulate the osteogenic differentiation of human adipose-derived stem cells. FASEB J. 2020;35(2):e21175. https://doi.org/10.1096/fj.202000378RR. Jensen C, Teng Y. Is it time to start transitioning from 2D to 3D cell culture? Front Mol Biosci. 2020;7:33. https://doi.org/10.3389/fmolb.2020.00033. Pineda ET, Nerem RM, Ahsan T.
Differentiation patterns of embryonic stem cells in two- versus three-dimensional culture. Cells Tissues Organs. 2013;197(5):399–410. https://doi.org/10.1159/000346166. Beauchamp P, Jackson CB, Ozhathil LC, Agarkova I, Galindo CL, Sawyer DB, et al. 3D co-culture of hiPSC-derived cardiomyocytes with cardiac fibroblasts improves tissue-like features of cardiac spheroids. Front Mol Biosci. 2020;7:14. https://doi.org/10.3389/fmolb.2020.00014. Pontes Soares C, Midlej V, de Oliveira ME, Benchimol M, Costa ML, Mermelstein C. 2D and 3D-organized cardiac cells shows differences in cellular morphology, adhesion junctions, presence of myofibrils and protein expression. PLoS One. 2012;7(5):e38147. https://doi.org/10.1371/journal.pone.0038147. Clark NM, Hinde E, Winter CM, Fisher AP, Crosti G, Blilou I, et al. Tracking transcription factor mobility and interaction in arabidopsis roots with fluorescence correlation spectroscopy. eLife. 2016;5:e14770. https://doi.org/10.7554/eLife.14770. Dmello C, Sawant S, Alam H, Gangadaran P, Mogre S, Tiwari R, et al. Vimentin regulates differentiation switch via modulation of keratin 14 levels and their expression together correlates with poor prognosis in oral cancer patients. PLoS One. 2017;12(2):e0172559. https://doi.org/10.1371/journal.pone.0172559. Capetanaki Y, Smith S, Heath JP. Overexpression of the vimentin gene in transgenic mice inhibits normal lens cell differentiation. J Cell Biol. 1989;109(4 Pt 1):1653–64. https://doi.org/10.1083/jcb.109.4.1653. Li B, Zheng YW, Sano Y, Taniguchi H. Evidence for mesenchymal-epithelial transition associated with mouse hepatic stem cell differentiation. PLoS One. 2011;6(2):e17092. https://doi.org/10.1371/journal.pone.0017092. Lian N, Wang W, Li L, Elefteriou F, Yang X. Vimentin inhibits ATF4-mediated osteocalcin transcription and osteoblast differentiation. J Biol Chem. 2009;284(44):30518–25. https://doi.org/10.1074/jbc.M109.052373. Sommers CL, Byers SW, Thompson EW, Torri JA, Gelmann EP. 
Differentiation state and invasiveness of human breast cancer cell lines. Breast Cancer Res Treat. 1994;31(2-3):325–35. https://doi.org/10.1007/BF00666165. Trakarnsanga K, Ferguson D, Daniels DE, Griffiths RE, Wilson MC, Mordue KE, et al. Vimentin expression is retained in erythroid cells differentiated from human iPSC and ESC and indicates dysregulation in these cells early in differentiation. Stem Cell Res Ther. 2019;10(1):130. https://doi.org/10.1186/s13287-019-1231-z. Murray P, Prewitz M, Hopp I, Wells N, Zhang H, Cooper A, et al. The self-renewal of mouse embryonic stem cells is regulated by cell-substratum adhesion and cell spreading. Int J Biochem Cell Biol. 2013;45(11):2698–705. https://doi.org/10.1016/j.biocel.2013.07.001. Ying QL, Wray J, Nichols J, Batlle-Morera L, Doble B, Woodgett J, et al. The ground state of embryonic stem cell self-renewal. Nature. 2008;453(7194):519–23. https://doi.org/10.1038/nature06968. Stepanova T, Slemmer J, Hoogenraad CC, Lansbergen G, Dortland B, De Zeeuw CI, et al. Visualization of microtubule growth in cultured neurons via the use of EB3-GFP (end-binding protein 3-green fluorescent protein). J Neurosci. 2003;23(7):2655–64. Kita-Matsuo H, Barcova M, Prigozhina N, Salomonis N, Wei K, Jacot JG, et al. Lentiviral vectors and protocols for creation of stable hESC lines for fluorescent tracking and drug resistance selection of cardiomyocytes. PLoS One. 2009;4(4):e5046. https://doi.org/10.1371/journal.pone.0005046. Westphal M, Jungbluth A, Heidecker M, Mühlbauer B, Heizer C, Schwartz J-M, et al. Microfilament dynamics during cell movement and chemotaxis monitored using a GFP–actin fusion protein. Curr Biol. 1997;7(3):176–83. https://doi.org/10.1016/S0960-9822(97)70088-5. Tinevez JY, Perry N, Schindelin J, Hoopes GM, Reynolds GD, Laplantine E, et al. TrackMate: An open and extensible platform for single-particle tracking. Methods. 2017;115:80–90. https://doi.org/10.1016/j.ymeth.2016.09.016. 
de Chaumont F, Dallongeville S, Chenouard N, Herve N, Pop S, Provoost T, et al. Icy: an open bioimage informatics platform for extended reproducible research. Nat Methods. 2012;9(7):690–6. https://doi.org/10.1038/nmeth.2075. Wells CA, Mosbergen R, Korn O, Choi J, Seidenman N, Matigian NA, et al. Stemformatics: visualisation and sharing of stem cell gene expression. Stem Cell Res. 2013;10(3):387–95. https://doi.org/10.1016/j.scr.2012.12.003. De Rossi MC, De Rossi ME, Sued M, Rodríguez D, Bruno L, Levi V. Asymmetries in kinesin-2 and cytoplasmic dynein contributions to melanosome transport. FEBS Lett. 2015;589(19, Part B):2763–8. Wasserman L. A concise course in statistical inference. New York: Springer-Verlag; 2010. Romero JJ, De Rossi MC, Oses C, Vázquez Echegaray C, Verneri P, Francia M, et al. Nucleus-cytoskeleton communication impacts on OCT4-chromatin interactions in embryonic stem cells. Microtubules dynamics - ES cells. In: figshare; 2021. https://doi.org/10.6084/m9.figshare.13952498.v2. Romero JJ, De Rossi MC, Oses C, Vázquez Echegaray C, Verneri P, Francia M, et al. Nucleus-cytoskeleton communication impacts on OCT4-chromatin interactions in embryonic stem cells. Microtubules - ES cells. In: figshare; 2021. https://doi.org/10.6084/m9.figshare.13948010.v3. Romero JJ, De Rossi MC, Oses C, Vázquez Echegaray C, Verneri P, Francia M, et al. Nucleus-cytoskeleton communication impacts on OCT4-chromatin interactions in embryonic stem cells. Actin filaments - ES cells. In: figshare; 2021. https://doi.org/10.6084/m9.figshare.13928732.v2. Romero JJ, De Rossi MC, Oses C, Vázquez Echegaray C, Verneri P, Francia M, et al. Nucleus-cytoskeleton communication impacts on OCT4-chromatin interactions in embryonic stem cells. Vimentin filaments - ES cells. In: figshare; 2021. https://doi.org/10.6084/m9.figshare.13925684.v2.
The work was supported by ANPCyT (PICT 2015-0370, PICT 2016-0828 and PICT-2018-1921 to V.L.), Universidad de Buenos Aires (UBACyT 20020150100122BA to V.L.), and CONICET (PIP 2014-11220130100121CO to V. L.). Juan José Romero, María Cecilia De Rossi and Camila Oses contributed equally to this work. Instituto de Química Biológica de la Facultad de Ciencias Exactas y Naturales (IQUIBICEN), CONICET-Universidad de Buenos Aires, Facultad de Ciencias Exactas y Naturales, C1428EGA, Buenos Aires, Argentina Juan José Romero, María Cecilia De Rossi, Camila Oses, Camila Vázquez Echegaray, Paula Verneri, Marcos Francia, Alejandra Guberman & Valeria Levi Departamento de Fisiología, Biología Molecular y Celular, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, C1428EGA, Buenos Aires, Argentina Alejandra Guberman Departamento de Química Biológica, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, C1428EGA, Buenos Aires, Argentina Valeria Levi Juan José Romero María Cecilia De Rossi Camila Oses Camila Vázquez Echegaray Paula Verneri Marcos Francia Conceptualization: V.L. and A.G.; methodology: V.L. and A.G.; formal analysis: J.J.R, M.C.D.R., C.O., and V.L.; investigation: J.J.R, M.C.D.R., C.O., C.V.E., P.V., and M.F.; writing—original draft: J.J.R., M.C.D.R., A.G., and V.L.; writing—review and editing: J.J.R., M.C.D.R., C.O., C.V.E., P.V., M.F., A.G., and V.L.; visualization: J.J.R., M.C.D.R., C.O., C.V.E., M.F., A.G., and V.L.; supervision: A.G. and V.L.; project administration: V.L. All authors read and approved the final manuscript. Correspondence to Alejandra Guberman or Valeria Levi. Supplementary Video S1. 3D organization of the microtubule network. Representative 3D confocal images of ES cells transfected with EMTB-3xGFP (green) and H2B-mCherry (red) (Scale bar: 5 μm). Related to Fig. 1a, top panel. Supplementary Video S2. 3D organization of the microtubule network. 
Representative 3D confocal images of ES cells transfected with EMTB-3xGFP (green) and H2B-mCherry (red) (Scale bar: 5 μm). Related to Fig. 1a, bottom panel. Supplementary Fig. S1. ES cells exhibit long membrane protrusions. (left) Representative confocal image of ES cells expressing YPet-OCT4 (green) and mem-mCherry (red) collected at a single plane of the z-stack (Scale bar: 10 μm). (middle) 3D reconstruction of the images showing a protrusion extending from one cell to a neighboring cell. (right) Zoom-in image of the same cell; the protrusion is indicated with an asterisk, and the red channel was digitally saturated to facilitate the visualization of the protrusion. Supplementary Video S3. EB3-GFP comets point in every direction. ES cells transfected with EB3-GFP and H2B-mCherry were imaged at 0.6 frames/s (100 frames) to observe the dynamics of EB3-GFP comets. Related to Fig. 1d. Supplementary Video S4. EB3-GFP comets radiate from a specific site in the cytoplasm. ES cells transfected with EB3-GFP and H2B-mCherry were imaged at 0.6 frames/s (100 frames) to capture the dynamical behavior of EB3-GFP comets. Related to Fig. 1d. Supplementary Video S5. EB3-GFP comets in close contact with the cell nucleus. ES cells transfected with EB3-GFP and H2B-mCherry were imaged at 0.6 frames/s (100 frames). Related to Fig. 1c. Supplementary Table S1. Meta-analysis of microarray, RNA-seq and proteomic datasets analyzed in this work. Supplementary Fig. S2. Omics data analysis of vimentin expression in mouse embryo and different cell types. Data analysis of vimentin expression from microarray, RNA-seq and proteomics (as indicated in each panel) performed in the Stemformatics data-mining platform. Bars represent mean ± SEM where applicable. Full metadata of the analyzed datasets is available in Additional file 7: Supplementary Table S1.
A The left panel shows the comparison between embryonic stem (ES) and epiblast-derived stem (EpiS) cells from different development stages of mouse embryo: Cavity (CAV; E5.5 – E6.0); Pre-primitive streak (PS; E6.0 – E6.5); Late Mid Streak (LMS; E6.75 – E7.25); Late Streak (LS; E7.25 – E7.5); Early Bud (EB; E7.75) and Late Bud (LB; E8.0). The right panel shows data from different tissues during advanced embryo development. B Data from different stem cell types: ES cells, mesenchymal stem (MS) cells and multipotent adult progenitor (MAP) cells. C Data from ES cell differentiation experiments. The left panel shows vimentin mRNA levels from ES cells and neural progenitor (NP) cells. The right panel shows data from ES cells and ES cell-derived mesoderm cells, cardiac progenitors and cardiomyocytes. D Data obtained from ES cells during their differentiation to epiblast-like stem (EpiLS) and primordial germ cell-like (PGCL) cells. E Data from ES cells, mouse embryonic fibroblasts (MEF) and induced pluripotent stem (iPS) cells. F RNA-seq (left panel) and proteomic (right panel) data of ES cells, and MEF during their reprogramming to iPS cells. Supplementary Video S6. 3D organization of the vimentin network. Representative 3D confocal images of ES cells transfected with GFP-vimentin (green) and H2B-mCherry (red) (Scale bar: 5 μm). Related to Fig. 3b, top panel. Supplementary Video S7. 3D organization of the vimentin network. Representative 3D confocal images of ES cells transfected with GFP-vimentin (green) and H2B-mCherry (red) (Scale bar: 5 μm). Related to Fig. 3b, bottom panel. Supplementary Fig. S3. Morphology of the ES cell colonies after different treatments that disturb the cytoskeleton. Representative transmission (top panels) and fluorescence (bottom panels) images of colonies of YPet-OCT4 ES cells and W4 ES cells (green: nuclei).
YPet-OCT4 cells were imaged in the control condition and after treatment with latrunculin-B, nocodazole, taxol or vinblastine, whereas W4 ES cells were transfected with GFP-vim(1–138). The last image was digitally saturated to facilitate the visualization of out-of-focus cells (Scale bars: 10 μm). The bottom panel shows a 3D section of the nocodazole-treated colony to exhibit its detachment from the substrate. Supplementary Fig. S4. OCT4-chromatin interactions are not affected by the microtubule network. Single-point FCS measurements were run in YPet-OCT4 ES cells. A Mean, normalized ACF obtained at the nucleoplasm of control (gray) and vinblastine-treated (violet) cells. B,C The ACF data were fitted with Eq. 1 to obtain the fractions of free (diffusion), long-lived bound and short-lived bound TF (B) and the characteristic times of long-lived and short-lived interactions of the TF with chromatin (C). These experiments were run using a higher laser power, which could explain the slightly different characteristic times from those shown in Fig. 5. The data are presented as mean ± SE for each experimental condition (control: gray bar, n=16; vinblastine: violet bar, n=16). Supplementary Fig. S5. Comparison of raw z-stack images before and after nuclei segmentation. Supplementary Fig. S6. Representative 3D images of ES cells expressing EMTB-3xGFP (green) and H2B-mCherry (red). Related to Fig. 1a. Supplementary Video S8. Representative time-lapse images of EB3-GFP comets. ES cells transfected with EB3-GFP and H2B-mCherry were imaged at 0.6 frames/s (100 frames) to observe the dynamics of EB3-GFP comets. Related to Fig. 1c and d. Supplementary Fig. S7. Quantification of EGFP-actin fluorescence intensity along membrane blebs. Related to Fig. 2c and d. Supplementary Table S2. Related to Fig. 4. Raw data. Romero, J.J., De Rossi, M.C., Oses, C. et al. Nucleus-cytoskeleton communication impacts on OCT4-chromatin interactions in embryonic stem cells. BMC Biol 20, 6 (2022).
DOI: https://doi.org/10.1186/s12915-021-01207-w
MITRE: inferring features from microbiota time-series data linked to host status

Elijah Bogart, Richard Creswell & Georg K. Gerber

Longitudinal studies are crucial for discovering causal relationships between the microbiome and human disease. We present MITRE, the Microbiome Interpretable Temporal Rule Engine, a supervised machine learning method for microbiome time-series analysis that infers human-interpretable rules linking changes in abundance of clades of microbes over time windows to binary descriptions of host status, such as the presence/absence of disease. We validate MITRE's performance on semi-synthetic data and five real datasets. MITRE performs on par with or outperforms conventional difficult-to-interpret machine learning approaches, providing a powerful new tool enabling the discovery of biologically interpretable relationships between microbiome and human host (https://github.com/gerberlab/mitre/). The human microbiome is highly dynamic on multiple timescales, changing dramatically during the development of the gut in childhood, with diet, or due to medical interventions [1]. Recently, a number of longitudinal studies have been undertaken, seeking to link the changes in the microbiota over time with medical interventions such as delivery by Cesarean section [2], dietary changes [3], or antibiotic treatment [4], or with disease outcomes in the host such as type 1 diabetes [5], dietary allergies [6], premature delivery [7, 8], necrotizing enterocolitis [9, 10], and infection [11, 12]. Deriving maximally useful information from these studies requires computational methods that can simultaneously identify patterns of change in the microbiome and link these patterns to the host's status (e.g., disease outcome, presence or absence of an intervention). 
Moreover, such computational methods must contend with numerous challenges inherent to microbiome time-series data, including measurement noise, sparse and irregular temporal sampling, and inter-subject variability. To overcome the challenges inherent in linking longitudinal microbiome data to host status, we developed MITRE, a computational model that infers human-interpretable predictive rules from high-throughput microbiome time-series data, implemented in an open-source software package (https://github.com/gerberlab/mitre/) [13]. MITRE falls into the general category of Bayesian supervised machine learning classifiers and predictive modeling: the algorithm uses a training dataset of microbiota time series and binary descriptions of host statuses (supervised learning) to learn a probability distribution (Bayesian inference) over a set of alternative models that predict the status of a host given only input microbiome data and optional covariates (classification). Bayesian approaches are powerful, because they provide principled estimates of uncertainty throughout the model, which is an especially important feature in biomedical applications where noisy inputs are the norm. We note that another rule-based method, association rule mining (ARM), has recently been applied to analyzing microbiome data in a different context (finding interaction patterns among OTUs) [14]. Although ARM has some commonalities with Bayesian rule learning approaches, ARM methods tend to employ user-based cutoffs and heuristics, rather than principled probabilistic methods, as their primary function is to mine large databases for putative interactions, rather than build predictive models. Further, unlike Bayesian models, ARM methods do not incorporate prior knowledge, as their focus is mining from large databases. 
In previous work, we presented the MDSINE [15] algorithm, which infers dynamical systems models from microbiome time-series data in order to forecast the population dynamics of the microbiome over time. Our present work, MITRE, addresses a different question: can we predict or infer the status of the host given microbiome time-series data. From the machine learning perspective, MDSINE is an unsupervised model, whereas MITRE is a supervised model. The key distinction is that MDSINE models microbiome data, whereas MITRE instead models host outcomes. Like other supervised models, MITRE focuses on finding only the essential features (in this case, microbial clades and relevant time windows) to explain the outcome, rather than attempting to explain the microbiome data itself. This architecture is ideal for highly heterogeneous datasets with many "distractors," which are the reality for longitudinal studies of the human microbiome. Supervised machine learning classifiers are employed in many biomedical predictive modeling applications, including forecasting (predicting a future outcome, such as the onset of disease, based on past data) and diagnosis or subtyping (predicting which category a subject belongs to based on all available data). MITRE's unique contributions are its modeling of the special properties of microbiome time-series data (phylogenetic and temporal relationships) and its emphasis on producing human-interpretable predictors. This latter capability is in contrast to various generic "black box" machine learning methods that have been applied to analyzing static microbiome data, such as random forests [6, 16,17,18], which may achieve high predictive accuracy but do not yield easily human-interpretable models; interpretability is especially challenging for such models in the context of time-series analyses, given repeated measurements and the fact that relevant dynamics may occur at multiple timescales. 
In the following sections, we introduce the MITRE framework, then provide benchmarking results of MITRE versus comparator methods on semi-synthetic data and five real microbiome time-series datasets, and finally illustrate examples of MITRE's exploratory data analysis capabilities and how these can help extract biological insights.

Conceptual overview of the MITRE model and software

Figure 1 provides an overview of the MITRE framework. MITRE takes as input the following: (1) tables of microbial abundances, typically operational taxonomic units (OTUs) from 16S rRNA amplicon sequencing or species mappings from metagenomic data, measured over time for each host; (2) a binary (two-valued) description of the status of each host (e.g., phenotype A or phenotype B); (3) an optional set of static covariates for each host (e.g., gender); and (4) placements of the microbes on a reference phylogenetic tree [19].

Fig. 1. MITRE learns human-interpretable rule-based models linking features of microbiota time-series data to host status. Rules operate on automatically learned time periods and groups of phylogenetically related microbes. a Schematic of the MITRE analysis pipeline, resulting in a single best predictive model as well as a distribution over alternative models that can be interactively explored. b Schematic of example rule in a applied to hypothetical data. Here, two subjects satisfy both the condition on the average abundance of microbe group A and the rate of change of abundance of group B.

Because MITRE seeks to learn patterns of change over time in the microbiome that link to host status, it is necessary to provide MITRE with data with sufficient temporal sampling. 
At a minimum, MITRE requires 3 time points, although we recommend at least 6 time points and preferably at least 12 time points based on experiments with semi-synthetic data detailed in the subsequent section; the subsequent analysis also provides information about the performance of MITRE with differing numbers of subjects in a study. Although highly irregularly sampled time series (i.e., regions of dense sampling followed by widely separated time points) can in principle be used as input to MITRE, such data cannot be fully exploited by MITRE because the algorithm seeks to find contiguous stretches of time (windows) to assess the temporal changes. Thus, if using non-uniformly sampled data, we recommend at least 3 consecutive proximate time points in each non-uniformly sampled region. Beyond these basic guidelines, multiple factors must be considered for appropriate experimental design of longitudinal studies, including the timescale of the relevant biological processes under study (e.g., rapid changes in the microbiome due to diet versus prolonged recovery from antibiotic exposure), see for instance [20, 21] for further discussion on this important topic. MITRE automatically learns from the provided data predictive models that can be expressed as a set of conditional statements, or human-readable rules, about time-localized patterns of change in the abundances of groups of phylogenetically related organisms. Weighted sums of the truth values of the rules are used to predict the status of each host. The MITRE software package also provides a graphical user interface (GUI) for interactive visualization of the output, which summarizes the predictive models learned from the data. To be more precise, a MITRE model consists of a baseline probability of a default host status plus a set of zero or more rules. 
Each rule is a conjunction of one or more detectors—conditional statements about bacterial abundances in the form "between times t0 and t1, the average abundance of bacterial group j is [above/below] threshold θl" or "between times t0 and t1, the slope of the abundance of bacterial group j is [above/below] threshold θl"—together with a multiplicative effect on the odds of the outcome of interest if all the detectors are satisfied. As a simple example, a MITRE model predicting the odds of an infant developing a disease in the first year of life might be: If, from month 2 to month 5, the average relative abundance of bacterial clade A is above 4.0%, and from month 5 to month 8, the relative abundance of bacterial clade B increases by at least 1.0% per month, the odds of disease increase by a factor of 10. If, from month 3 to month 10, the average relative abundance of OTU C is less than 9.5%, the odds of disease decrease by a factor of 2. The baseline probability of disease is 22.0%. Figure 1b schematically illustrates the application of a rule set to hypothetical data. To predict the probability that an individual will develop the disease, the effects of each rule satisfied by that individual's microbiome data are combined with the baseline disease probability. A comprehensive pool of possible detectors is generated automatically at the beginning of a MITRE analysis, including the detectors that apply to average values and rates of change of clades at all levels on the phylogenetic tree of observed bacteria at as many time windows as the temporal resolution of the data will allow (see the "Methods" section.) By combining the detectors from this pool, rules in a MITRE model can capture rich temporal patterns, but still remain human-interpretable because each component rule is easy to understand. MITRE is a nonlinear model, which has a number of advantages over a linear model. 
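The semantics of a single detector, and of a rule as a conjunction of detectors, can be sketched in a few lines of Python. This is an illustrative sketch, not MITRE's internal API; the function names and data layout are our own.

```python
import numpy as np

def detector_satisfied(times, abundances, t0, t1, threshold,
                       kind="average", direction="above"):
    """Evaluate one MITRE-style detector on a single subject's series.

    times, abundances: 1-D arrays for one bacterial group.
    kind: "average" compares the mean abundance in [t0, t1] to threshold;
          "slope" compares the least-squares rate of change instead.
    """
    mask = (times >= t0) & (times <= t1)
    t, y = times[mask], abundances[mask]
    if kind == "average":
        value = y.mean()
    else:  # "slope": fitted rate of change over the window
        value = np.polyfit(t, y, 1)[0]
    return bool(value > threshold if direction == "above" else value < threshold)

def rule_satisfied(times, abundances_by_group, detectors):
    """A rule is a conjunction: true only if every detector is satisfied.
    Each detector is a tuple (group, t0, t1, threshold, kind, direction)."""
    return all(detector_satisfied(times, abundances_by_group[g], *args)
               for g, *args in detectors)
```

A slope detector simply swaps the windowed mean for a fitted rate of change, which is the only difference between the two detector forms described above.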
Most obviously, nonlinear models can capture effects such as thresholds/saturation or interactions between variables. A more subtle issue, particularly relevant to microbiome ratio-based data, is that linear models introduce mathematical/statistical difficulties when analyzing compositional or proportional data. While special constraints are needed for linear models to overcome these difficulties [22], nonlinear models do not suffer from the same limitations because they can inherently learn nonlinear transformations of the data. It is particularly straightforward to understand the transformations produced by the MITRE detectors described above: the learned thresholds effectively discretize ratio data into distinct levels, a transformation that renders the data mathematically non-compositional. The MITRE framework is fully Bayesian, meaning it learns a distribution over models, called the posterior probability distribution, which takes into account both the input data and prior information provided. In the case of MITRE, this prior information favors by default parsimonious explanations, i.e., short sets of simple rules. Importantly, the default prior is designed to favor the empty rule set, or the case in which the baseline odds only are used to predict host status, and the microbiome data plays no role. This feature of the default prior is designed to guard against over-fitting. Moreover, through the formalism of Bayes factors (see the "Methods" section), MITRE provides a quantitative measure for the evidence favoring no association between the microbiome data and host status or the alternative of any rule sets that predict such a relationship; this feature allows the user to rigorously evaluate whether sufficient signal is present in the microbiome data to predict host status. Of note, this measure in effect incorporates multiple "hypotheses" simultaneously (as the inference procedure explores the entire space of possible rule sets at once.) 
Additional Bayes factors allow the user to assess the evidence that each particular bacterial clade or OTU is associated with the host status, by comparing the evidence for a model in which no detector in the rule set applies to the clade of interest to a model in which at least one detector does apply to the clade. The MITRE software approximately infers the posterior probability distribution using a custom Markov chain Monte Carlo algorithm and reports a point estimate of the single best rule set as well as a summary of the distribution, which the user may investigate interactively with the provided GUI. To make predictions for new data, either the point estimate or an ensemble of multiple rule sets, weighted according to their posterior probabilities, may be used. Benchmarking against standard machine learning methods: semi-synthetic data We used semi-synthetic data to compare the cross-validated predictive performance of MITRE to two popular standard machine learning methods, random forests, and L1-regularized logistic regression, which have been widely used to analyze data from static microbiome studies. We tested the performance of MITRE and the comparator methods using cross-fold validation. In brief, a subset of the data, including both microbiome time-series measurements and host status labels, was used to train the models, and then model performance was tested by predicting the host status labels on the unseen data using only the microbiome time-series measurements as inputs; this process was repeated to cycle through the complete dataset. For each method, the performance was evaluated using the F1-score under cross-validation, converting modeled probabilities of outcomes to binary predictions by applying a threshold at probability 0.5. 
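This scoring procedure — thresholding the modeled probabilities at 0.5 and computing the F1-score — is straightforward to reproduce; the sketch below is plain Python, independent of any of the methods compared.

```python
def f1_score_at_half(probs, labels):
    """F1 = harmonic mean of precision and recall, after converting
    predicted probabilities to binary predictions at a 0.5 cutoff."""
    preds = [1 if p > 0.5 else 0 for p in probs]
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)
    if tp == 0:
        return 0.0  # no true positives: precision or recall is zero
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```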
The F1-score, a widely used metric for assessing binary classifiers, is the harmonic mean of precision (positive predictive value) and recall (sensitivity), providing a single summary that balances the fraction of positive predictions that are correct against the fraction of true positives that are recovered. We simulated data from a real dataset using a parametric bootstrapping-type procedure, in which models of microbiome dynamics were employed to interpolate the real data and inject temporal perturbations into microbial clades to simulate a "disease" host phenotype (see the "Methods" section for details). The real dataset [2] we bootstrapped from tracked the gut microbiome composition from birth to 2 years of age in a cohort of US infants; we chose this dataset because it was among the densest and most regularly sampled of available time-series datasets, and also studied a relatively large number of subjects. To gain insight into the predictive performance of the different methods, we simulated data with varying numbers of subjects or time points, and one or two temporal perturbations to microbial clades to simulate subjects with a "disease." In other words, for a single-clade perturbation, we assumed for subjects with a "disease" a systematic change over time in abundances of a single clade of microbes, and for a two-clade perturbation, we assumed systematic changes over time in abundances of two separate clades of microbes; these scenarios correspond to the results observed in real datasets as described below. We assumed the perturbations occur over a limited but unknown time period during the study (~ 20% of the study duration), which represents a challenging but biologically relevant scenario for analysis. 
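The perturbation idea can be caricatured in a toy simulation. This is not the paper's parametric bootstrap (which interpolates real data with dynamical models); it only illustrates the shape of the task: a "diseased" subject has one small clade systematically shifted over a contiguous window covering roughly 20% of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_subject(n_taxa=20, n_times=18, diseased=False,
                     clade=(0, 1, 2), effect=2.0):
    """Toy semi-synthetic series: log-normal noise around a baseline,
    with a multiplicative perturbation of one clade in 'diseased'
    subjects over a window covering ~20% of the time points."""
    x = rng.lognormal(mean=0.0, sigma=0.5, size=(n_taxa, n_times))
    if diseased:
        w = max(1, int(0.2 * n_times))           # window width (~20%)
        start = rng.integers(0, n_times - w + 1)  # random window position
        for j in clade:
            x[j, start:start + w] *= effect       # systematic shift
    return x / x.sum(axis=0)  # renormalize to relative abundances

case, control = simulate_subject(diseased=True), simulate_subject()
```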
The ranges of subjects and time points simulated correspond approximately to those in real studies and thus provide insight into the performance of the algorithms on realistically sized studies. Note that MITRE is a supervised machine learning method, which does not directly model the covariates (microbiome data.) Thus, our data simulation procedure is unrelated to the underlying MITRE model and not expected to introduce bias in favor of our method. To provide as reasonable a comparison as possible, we used the average abundance of each OTU in a series of time windows as input to the comparator algorithms. It is important to note that there are no prior methods available that were specifically designed for supervised learning from microbiome time-series data. In fact, the comparator algorithms we implemented themselves represent an advance over the state of the art for many studies. To date, most studies have employed an ad hoc strategy of manually identifying time windows of interest within the experiment and then testing for a differential abundance of pre-specified groups of taxa in each time window separately. Such an approach has significant limitations. For example, effects occurring outside the defined windows or across their boundaries may not be detected, and analyzing each time window/taxon pair independently significantly reduces statistical power and precludes the discovery of interactions across taxa or sequences of events across multiple time windows. Our results on semi-synthetic data over a range of scenarios (Fig. 2a–d) demonstrate superior cross-validated predictive performance of MITRE compared to the other methods. Several interesting trends are also evident from these results. 
First, the MITRE ensemble (use of multiple rule sets weighted according to their posterior probabilities) and point (estimate of single best rule set) methods have similar performance, except for in the setting of low numbers of subjects, which is likely simply a stochastic effect since all the methods in that setting have poor performance. The similar performance of the MITRE point and ensemble methods is very encouraging from the interpretability perspective, since the point method yields a single, human-interpretable rule set. Second, as expected, all of the methods improve in performance with increasing numbers of subjects, with eventual plateauing of gains in performance at a level ultimately limited by noise in the data. Third, there is also an improvement in performance with increasing numbers of time points, but this improvement is less impressive. This phenomenon can be partially explained by our assumption in generating the semi-synthetic data that perturbations corresponding to the "disease" phenotype occur over a limited time period during the study. Thus, sampling of more time points outside the perturbation period provides only limited additional information useful for prediction. Fourth and finally, we also see generally worse performance of all methods in the more complex setting of two perturbations in the "disease" cases, particularly with limited numbers of subjects or time points. Interestingly, random forests outperform L1-regularized logistic regression in the two-perturbation case in the setting of low numbers of subjects or time points, while the opposite is true in the one-perturbation case, which may be due to random forest's capacity to handle nonlinearities. In any event, MITRE, which models nonlinearities through conjunctions in rules, consistently outperforms the other two methods in this setting as well. 
Overall, our results demonstrate that MITRE, a method specifically tailored for analyzing microbiome time-series data, outperforms generic machine learning methods. Moreover, we provide a simulation and testing platform for users to investigate the questions relevant to particular microbiome time-series datasets in the future.

Fig. 2. Cross-validated predictive performance of MITRE and comparator methods on semi-synthetic and real data. a–d Results on semi-synthetic data. A parametric bootstrapping-type method was used to generate simulated data from an underlying real dataset. Simulated cases were generated by randomly selecting and perturbing bacterial clades over a randomly selected limited time window (~ 20% of the duration of the study); an equal number of control subjects were simulated. For the one-clade perturbation scenarios, the clade remained unperturbed for the simulated controls; for the two-clade perturbation scenarios, one clade was perturbed in the simulated control subjects, and both were perturbed in the simulated cases. a, b One or two clades randomly perturbed in simulated subjects, 18 time points, varying numbers of subjects. c, d One or two clades randomly perturbed in simulated subjects, 32 subjects, varying numbers of time points. e Results on real data. The different methods were used to predict the indicated categories in the datasets shown. F1-score is the harmonic mean of precision and recall; higher scores indicate superior results.

Benchmarking against standard machine learning methods: real data

We next evaluated the performance of MITRE on real experimental datasets with 16S rRNA amplicon and whole-genome shotgun metagenomic sequencing data, from five representative published studies with relatively dense temporal sampling and numbers of subjects. Vatanen et al. 
[6] tracked the gut microbiome composition and life history data, including allergy diagnoses and serum IgE levels, from birth to 3 years of age in cohorts of infants at high risk for autoimmune disease in Finland, Estonia, and Russia. David et al. [3] tracked the gut microbiome composition of healthy adults before, during, and after a 5-day period of consuming exclusively plant-based or exclusively animal-based diets. Bokulich et al. [2] tracked the gut microbiome composition from birth to 2 years of age in a cohort of US infants and examined the effects of mode of delivery, diet, and antibiotic exposure. Kostic et al. [5] tracked the gut microbiome of Finnish and Estonian infants at high risk for type 1 diabetes throughout the first years of life, examining the microbiome correlates of disease development. Finally, DiGiulio et al. [7] tracked the composition of the vaginal microbiome in a cohort of pregnant women, investigating an association with premature delivery. Using the microbiome and outcome/class data from these five studies, we defined a total of 11 representative microbiome-based prediction or classification tasks (e.g., given the vaginal microbiome data of DiGiulio et al., predict which women in the cohort experienced premature delivery; given the gut microbiome data of David et al., determine which time series correspond to exclusively animal-based diets versus exclusively plant-based diets). A full list of the tasks analyzed is given in Additional file 1: Table S1. In 5 out of the 11 tasks, we found that at least 1 of the methods performed well (F1-score > 0.7, indicating reasonably high precision and recall). These tasks represent scenarios in which true biological signal may be present in the data and thus serve as the most meaningful basis for comparing the performance of the different methods. Detailed results for all methods on all tasks are given in Additional file 2: Table S2. 
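For reference, the comparator pipeline used in these benchmarks — averaging each OTU within a series of time windows and passing the flattened features to a generic classifier — can be sketched in numpy. The equal-width windowing shown is our illustrative assumption; the paper does not specify the exact windowing scheme.

```python
import numpy as np

def window_features(times, X, n_windows=4):
    """X has shape (n_subjects, n_taxa, n_times). Average each taxon
    within each of n_windows equal-width time windows and flatten,
    giving one feature vector per subject. These vectors can then be
    fed to a generic classifier such as a random forest or
    L1-regularized logistic regression."""
    edges = np.linspace(times.min(), times.max(), n_windows + 1)
    blocks = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_window = (times >= lo) & (times <= hi)
        blocks.append(X[:, :, in_window].mean(axis=2))
    # shape: (n_subjects, n_taxa * n_windows)
    return np.concatenate(blocks, axis=1)
```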
Both the MITRE point estimate and ensemble methods achieved high accuracy on all five of the relevant prediction or classification tasks (Fig. 2e). In the case of distinguishing infants fed formula-based diets from those predominantly breast-fed (Bokulich et al.), predicting seroconversion to serum autoantibody positivity in infants at high risk of T1D (Kostic et al.), and predicting premature delivery (DiGiulio et al.), MITRE significantly outperformed the random forest and L1-regularized logistic regression approaches; for Russian cohort membership (Vatanen et al.) or plant-based diet (David et al.) prediction, MITRE performed on par with the best comparator method (random forest). These results are consistent with our semi-synthetic data simulations as well. Collectively, our results suggest that MITRE's phylogenetic aggregation approach and robust use of the temporal structure of the data provide significant advantages for classification, and the increased interpretability of the MITRE point estimate comes at little if any cost in predictive accuracy. Model interpretability and exploratory analysis capabilities We illustrate here an example demonstrating MITRE's ability to achieve high accuracy while maintaining interpretability. The best (point estimate) rule set learned by MITRE to distinguish between predominantly formula-fed and predominantly breast-fed infants in the study of Bokulich et al. [2] is "If, between the 1st day of life and the 156th day of life, the average abundance of clade 13241 increases faster than 0.03% per day, the probability that the subject was predominantly formula-fed is 79%; otherwise, that probability is 5.4%." Though this MITRE rule set is simple enough to express in a single sentence, it outperforms the random forest models that aggregate the predictions of over a thousand decision trees (Fig. 2e). Moreover, this MITRE rule set lends itself readily to biological interpretation. 
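Mechanically, applying this point-estimate rule to a new subject requires only a slope estimate over the stated window. The sketch below hard-codes the quoted thresholds and probabilities; how MITRE aggregates OTUs into clade 13241 and estimates the slope is abstracted away, and abundances are assumed to be expressed in percent.

```python
import numpy as np

def formula_fed_probability(days, clade_13241_abundance):
    """Apply the quoted point-estimate rule: if the abundance of clade
    13241 (in %) rises faster than 0.03% per day between day 1 and
    day 156, predict P(formula-fed) = 0.79; otherwise 0.054."""
    m = (days >= 1) & (days <= 156)
    slope = np.polyfit(days[m], clade_13241_abundance[m], 1)[0]
    return 0.79 if slope > 0.03 else 0.054

days = np.array([1.0, 30, 60, 90, 120, 156])
rising = 0.05 * days              # 0.05% per day: the rule fires
flat = np.full_like(days, 2.0)    # no trend: the rule does not fire
```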
Clade 13241 is a broad group of Firmicutes, including OTUs in the dataset mapping to Ruminococcus gnavus, Roseburia hominis, and several Clostridium and Blautia species. These species are generally viewed as more representative of adult or at least more mature microbiomes, being strict anaerobes with specialized carbon source utilization requirements and capabilities (e.g., [23]), suggesting that the formula diet may shift infants toward more adult-like gut microbiota. Of note, the expressiveness and interpretability of the MITRE rule set format is retained even in cases of nonlinear interactions across multiple clades and time windows, see Additional file 3: Supplementary Note for an example. In addition to providing the point estimate, which can serve as a powerful predictive model as described, MITRE also allows the user to explore the distribution of probable rules learned by the framework. Such explorations can be useful for further interpreting rule sets and generating biological hypotheses. Figure 3 illustrates MITRE's capabilities for interactive visualization of the distribution of learned rules. Heat maps as shown in Fig. 3a and d allow the user to examine the time windows and regions of the phylogenetic tree where the temporal changes in the microbiota are most strongly associated with the outcome of interest. In Fig. 3a–c, MITRE has been used to learn the rules that distinguish the microbial dynamics observed when the subjects in the study of David et al. [3] were fed an exclusively plant-based diet for 5 days versus the dynamics observed when subjects were fed an exclusively animal-derived diet. The user has clicked on two areas on the heat map, revealing rules that apply to different time windows and different groups of OTUs in the order Clostridiales. 
The first rule set pertains to a clade containing Roseburia species, which are butyrate producers, whereas the second rule set pertains to a clade containing Dorea species, which produce other short-chain fatty acids including acetate and formate, but not butyrate. Thus, this capability to explore the distribution of rule sets allows the user to find evidence that the animal-based diet promotes two groups of phylogenetically distinct, and likely functionally distinct, groups of microbes.

Fig. 3. MITRE supports exploratory analyses through an interactive visualization interface. The interface allows the user to explore the distribution of learned rules. MITRE was applied to predict diet type from data from David et al. [3] (a–c) or Bokulich et al. [2] (d, e). In a and d, cell colors indicate the strength of evidence that the dynamics of an OTU, or one of its ancestors, during a time window is associated with diet. b, c, e High-probability detectors and phylogenetic subtrees to which they apply. b, c Analyses reveal dynamic behaviors of two different clades, one with butyrate producers and the other without, which distinguish subjects on plant- or animal-based diets. The animal-based diet thus promotes two groups of phylogenetically distinct microbes which are also likely functionally distinct. e Analyses reveal dynamic behavior of a clade of bacteria, associated with a more mature microbiome, which is increased in the predominantly formula-fed infants, suggesting the formula diet may shift infants toward more adult-like gut microbiota. Red lines, threshold slopes/abundances; black lines, median slopes/abundances. Median effect = median over all rules containing the detector.

As another example, Fig. 3d and e demonstrate the exploration of the posterior distribution of rule sets identifying the temporal patterns that distinguish the microbiota of predominantly formula-fed and predominantly breast-fed infants in the study of Bokulich et al. 
[2], for which the point estimate described above performed well. The heat map of Fig. 3d shows that the high-posterior-probability rule sets are strongly focused on a single group of OTUs in the first 156 days of life. Figure 3e presents a particular detector, included in many such rule sets, that discriminates effectively between diet types in the training data (which is the detector used by MITRE to form the point estimate rule discussed above). Thus, in this case, the user finds evidence that the posterior distribution is essentially unimodal, with the point estimate alone characterizing the temporal differences between the formula and breast milk-fed infants well. MITRE offers a number of advantages over generic statistical or machine learning methods. Incorporation of phylogenetic information readily allows for biological interpretation of results, as discussed, whereas standard classification methods that evaluate each taxon independently clearly do not have this advantage. MITRE automatically learns time windows that are relevant to predicting host status, as opposed to generic approaches that require the user to manually specify the periods of interest, which are generally unknown a priori. We have also highlighted the utility of MITRE's human-readable rules. These rules can capture rich temporal patterns and nonlinear relationships among microbes, but remain interpretable, as they are composed of simple and understandable detectors. Indeed, as we have shown, MITRE rule sets are not only easy to understand, but a single MITRE rule can outperform black box machine learning methods that make predictions based on collections of hundreds to thousands of components, which are difficult to understand even individually. Another important feature of the MITRE framework is that it is fully Bayesian. 
Bayesian models are increasingly being adopted in a variety of fields, in particular, for biomedical applications (e.g., [24]), because they provide a unified framework that handles a number of key modeling and inferential issues, including incorporation of prior knowledge, accurate estimation of confidence in predictions, and principled comparisons of multiple models. Bayesian methods for model comparison have recently been highlighted as powerful alternatives [25] to traditional p value-based hypothesis testing [26], because Bayesian approaches allow direct comparison of multiple relevant alternative models, rather than just the ability to reject a null model that is often of little interest in itself. In particular, MITRE facilitates principled model comparisons by calculating Bayes factors [27], which quantify the strength of the evidence provided by the data for each of a set of competing models. We have also described how the exploration of the posterior distribution of rule sets using the provided GUI in the MITRE software package allows the user to evaluate the possibility of multiple informative rule sets and formulate biological hypotheses about the dataset. Although MITRE as currently implemented is primarily designed to use taxonomic abundance profiles derived from 16S rRNA amplicon sequencing or WGS metagenomics data as input—currently the most common data types in longitudinal microbiome studies—the model and software can readily incorporate other time series of additional data types, e.g., host physiological measurements, metabolomics, or metagenomics-derived functional profiles, as such data become more widely available. Other model features that could be readily added in the future include time-varying, continuous (e.g., quantitative host traits), and multiple host outcomes, e.g., multivariate host readouts such as blood pressure, blood glucose, and body weight. 
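As a concrete illustration, the Bayes factor for "some association" versus "no association" can be estimated by comparing posterior and prior odds of a non-empty rule set, with the posterior probability approximated from MCMC samples of the rule-set size. This is a schematic calculation of the general Bayes-factor identity; MITRE's internal estimator may differ.

```python
def bayes_factor_nonempty(posterior_rule_set_sizes, prior_p_empty):
    """Estimate BF = (posterior odds of a non-empty rule set)
                   / (prior odds of a non-empty rule set)
    from MCMC samples of the rule-set size (0 = empty rule set)."""
    n = len(posterior_rule_set_sizes)
    post_p_empty = sum(1 for k in posterior_rule_set_sizes if k == 0) / n
    post_odds = (1 - post_p_empty) / max(post_p_empty, 1e-12)
    prior_odds = (1 - prior_p_empty) / prior_p_empty
    return post_odds / prior_odds
```

A Bayes factor well above 1 indicates that the data shifted belief toward models in which the microbiome predicts host status; a value near 1 indicates the data provided little evidence either way.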
We have demonstrated that our software package MITRE overcomes unique challenges of linking microbiome time-series information to host outcomes, while drawing on a well-established tradition of rule-based techniques in machine learning and artificial intelligence [28,29,30], and can perform as well as or better than "black box" machine learning methods while maintaining interpretability. This latter feature is critical as microbiome analyses move into clinical applications, in which patients and physicians necessarily place a premium on transparency and interpretability of results. We have provided an open-source and user-friendly implementation of our method, which we expect will greatly aid investigators analyzing longitudinal host-microbiome studies and ultimately provide novel insights into the complex interplay between microbiome dynamics and host health and disease.

Operation of the software and input data requirements

The MITRE software is implemented in Python 2.7.3. MITRE and its dependencies are available through the Python Package Index, pypi.python.org, facilitating installation across multiple platforms. The software is run from the command line, with parameters and other inputs specified using a straightforward configuration file format. Each MITRE run requires four input files (for the standard case of 16S rRNA amplicon data): a table of OTU abundances in each sample, a table specifying the subject and time point associated with each sample, a table specifying the outcome (and optionally other data) associated with each subject, and phylogenetic placements of the OTUs on a reference tree. The user provides the three tables in comma-separated value format and the phylogenetic placements in the .jplace format produced by pplacer. Alternative input data types, including taxonomic abundance profiles generated from WGS metagenomic data with Metaphlan, are described in the MITRE manual online.
The output of the software, described in detail below, includes textual summaries of a single best set of rules (the point estimate) and the distribution of probable alternative rule sets, as well as an HTML file providing a graphical interface for interactive visualization of the results. Additional details of the method are found in Additional file 4.

MITRE model details

Mathematical basis of the MITRE model

MITRE can be expressed as a hierarchical generative model that generates sets of rules of the form described above. The generative process, starting at the top of the hierarchy, is as follows:

1. Sample a length K for the rule set.
2. For each rule k ∈ {1,…,K}, sample: a weight βk; a rule length Lk (the number of detectors in the rule); and the detectors d ∈ {1,…,Lk}, drawn from a pre-defined pool of detectors (see below) according to a probability distribution parameterized by the time window and bacterial group (phylogenetic subtree) to which each detector applies.

The rules are then weighted in a Bayesian logistic regression model to predict host status. To be precise, assume we have observations xijt for i = 1,2,…,Nsubjects of relative abundances of bacterial OTUs j = 1,2,…,NOTUs sampled at time points t = 1,2,…,Nsamples,i, as well as a binary status variable yi for each host. The MITRE probability model can then be expressed as: $$ {\displaystyle \begin{array}{c}{y}_i\sim \mathrm{Bernoulli}\left({p}_i\right)\\ {}{p}_i=\frac{1}{1+{e}^{-{\psi}_i}}\\ {}\overrightarrow{\psi}={\beta}_0+A\left(R,x\right)\overrightarrow{\beta}\\ {}\overrightarrow{\beta}\mid R\sim \mathrm{Normal}\left(0,{\sigma}_b^2I\right)\\ {}{\beta}_0\sim \mathrm{Normal}\left(0,{\sigma}_b^2\right)\\ {}R\sim \pi \left(R,x\right)\end{array}} $$ Here, R is a set of rules, and A(R,x) is the matrix whose entry Aik is 1 if the data from subject i satisfies the conditions of all detectors in the kth rule in the set, and zero otherwise.
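As an illustration of how this likelihood is evaluated, the sketch below computes A(R, x) and the resulting class probabilities for a toy dataset. The detector and rule representations here are simplified assumptions for exposition, not MITRE's actual data structures.

```python
import numpy as np

def detector_fires(times, abundance, t0, t1, threshold, above):
    """Detector: 'between t0 and t1, the average abundance of a group is
    above (or below) threshold'."""
    window = (times >= t0) & (times <= t1)
    avg = abundance[window].mean()
    return avg > threshold if above else avg < threshold

def rule_matrix(rules, subjects):
    """A(R, x): A[i, k] = 1 iff subject i satisfies every detector in rule k.
    Each subject is (times, abund) with abund of shape (n_groups, n_times);
    each detector is a tuple (group, t0, t1, threshold, above)."""
    A = np.zeros((len(subjects), len(rules)))
    for i, (times, abund) in enumerate(subjects):
        for k, rule in enumerate(rules):
            A[i, k] = all(detector_fires(times, abund[g], t0, t1, th, up)
                          for (g, t0, t1, th, up) in rule)
    return A

def predict_prob(rules, beta0, beta, subjects):
    """p_i = 1 / (1 + exp(-(beta0 + [A(R, x) beta]_i)))."""
    psi = beta0 + rule_matrix(rules, subjects) @ beta
    return 1.0 / (1.0 + np.exp(-psi))
```

A subject whose data satisfy no rule receives probability logistic(β0); each satisfied rule shifts the log-odds by its weight βk.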
Conditional on R, this is a standard Bayesian logistic regression model, whose covariates are the truth values of the rules in R (along with an offset term modeling baseline odds). MITRE also allows the optional inclusion of static non-microbiome covariates; see Additional file 3: Supplementary Note for complete details. The prior probability distribution over rule sets, π(R,x), is a mixture over the probability Θempty for an empty rule set R0 versus a truncated negative binomial distribution for the length of a non-empty rule set. For a non-empty rule set, the prior further models the distribution over the detectors that comprise each rule, taking into account the length of the time window for the detector and its associated position on the phylogenetic tree. Hyperparameters for these priors, as well as priors on other variables, complete the model. A full specification of priors in MITRE and a discussion of sensitivity to the choice of hyperparameters are given in Additional file 3: Supplementary Note.

Generation of pools of detectors from data

MITRE generates a comprehensive pool of detectors from the supplied data and user-specified parameters tmin, tmax, Nw, and Nθ, as follows: Divide the duration of the experiment/observations into Nw equal basic time windows and enumerate all combinations of 1 or more consecutive basic time windows that are longer than tmin and shorter than tmax and during which at least one sample was collected for every subject. Within each such time window (t0, t1), for each bacterial group (phylogenetic subtree) j, calculate the average abundance of the group in each subject i and sort those values, obtaining v1 ≤ v2 ≤ … ≤ vNsubjects. If Nsubjects < Nθ + 1, for l = 1,2,…,Nsubjects − 1, let θl = (vl + vl+1)/2 and add the detectors "between t0 and t1, the average abundance of group j is above θl" and "between t0 and t1, the average abundance of group j is below θl" to the pool.
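For the unclustered case (Nsubjects < Nθ + 1), the midpoint-threshold construction can be sketched as follows; this is a simplified illustration for a single group and time window, and the bookkeeping in MITRE itself is richer.

```python
import numpy as np

def threshold_detectors(avg_by_subject):
    """Given one group's per-subject average abundances within a fixed time
    window, return ('above'/'below', theta) detector pairs with thresholds at
    the midpoints between consecutive sorted values."""
    v = np.sort(np.asarray(avg_by_subject, dtype=float))
    detectors = []
    for l in range(len(v) - 1):
        theta = (v[l] + v[l + 1]) / 2.0  # midpoint between v_l and v_{l+1}
        detectors.append(("above", theta))
        detectors.append(("below", theta))
    return detectors
```

With N subjects this yields 2(N − 1) detectors per group per window, which motivates the clustering step described next when N is large.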
If instead Nsubjects > Nθ, hierarchically cluster the values v1, …, vNsubjects into Nθ groups, let θl be the midpoint between cluster l and cluster l + 1 for l = 1,2,…,Nθ − 1, and add the detectors corresponding to those thresholds instead. Then, repeat the process for all combinations of one or more consecutive basic time windows longer than tmin and shorter than tmax during which at least two samples were collected for every subject, calculating the slope of the abundance of each group j in each subject i during each such window, and adding the detectors "If, between t0 and t1, the slope of the abundance of group j is [above/below] θl" to the pool (again carrying out a clustering process to reduce the number of threshold values to Nθ, if needed). The runtime of the inference algorithm (described below) depends approximately linearly on the size of the detector pool; thus, the choice of parameters tmin, tmax, Nw, and Nθ controls a tradeoff between high resolution (temporally, and in the space of threshold values for OTU abundance/slope) and performance. It is recommended to choose Nw as large as possible while ensuring that most basic time windows include at least one observation from every subject, and to set Nθ = 40. tmin should generally be set to 0, and tmax to either the duration of the study or to half that duration (to enforce temporal localization of the rules in cases where, e.g., a dramatic increase in abundance at a well-defined time also leads to a notable increase in average abundance over the period of the entire study); tmax may be adjusted if dynamics on a particular timescale are of a priori interest.

Model inference

We perform approximate Bayesian inference to learn the posterior distribution over the model parameters, including rule sets R and regression coefficients β.
MITRE employs a custom Markov chain Monte Carlo (MCMC) algorithm, which alternates efficient updates of the regression coefficients using a Polya-Gamma auxiliary variable scheme [31], Metropolis-Hastings steps that propose changes to the rule set R, and updates to the hyperparameters governing the prior distribution over rule sets. The MCMC algorithm is described in detail in Additional file 3: Supplementary Note. Briefly, four types of updates to R are considered: A randomly chosen detector in R may be replaced by another detector from the pool. A randomly chosen detector in R may be removed from R (if it is the only detector in a rule, the rule is removed as well). A detector from the pool may be added to R, either to an existing rule chosen at random or to form a new rule of length 1. A detector may be moved from one rule in R to another. For the analyses presented here, 50,000 samples were drawn from the posterior distribution (except for analyses of data from Vatanen et al. [6], where 25,000 samples were used, and Kostic et al. [5], where 100,000 samples were used). Mixing of the MCMC sampler was assessed using the diagnostics described in Additional file 3: Supplementary Note; we recommend users employ these diagnostics to determine the appropriate number of samples needed for their particular studies. Run time depends on the size and complexity of the dataset; using a single Intel Xeon CPU (E5-2697 v3 2.60GHz), sampling took 45 min for the data of DiGiulio et al. [7], 23 h for the data of David et al. [3], 30 h for the data of Bokulich et al. [2], and 64.5 h for the data of Vatanen et al. [6]. Cross-validation was performed in parallel (one fold per core), requiring a similar total elapsed time for each study.

Data simulation

We simulated from the Bokulich et al. [2] dataset using a parametric bootstrapping-style procedure. Simulated subjects were sampled with replacement from the set of control subjects in the real data.
Equal numbers of simulated case and control subjects were generated for each scenario. In order to have sufficient real data for bootstrapping across the range of scenarios of interest, we excluded subjects with fewer than 13 sampled time points, with no samples before day 10, or with fewer than 600 days of study; this yielded 20 subjects for bootstrapping. We then truncated the data to the interval between 10 and 600 days, since this contained the densest sampling across subjects. Perturbations of duration 120 days, in time windows occurring randomly throughout the experiment, were introduced into randomly selected clade(s) to simulate cases, with the magnitude of the perturbation distributed among clade members according to the relative abundance of members in the original data. The duration of the perturbation(s) was chosen to be approximately of the order of that seen with the MITRE point rule on the real dataset. Perturbations were introduced randomly into clades with the following characteristics: minimum average relative abundance of 0.1%, maximum average relative abundance of 20%, and a maximum of 30 OTUs in the clade. These parameters were chosen so as to provide a meaningful relative disturbance to other clades, but not to drastically disrupt the entire microbiome (which would present less of a challenge to the prediction algorithms). The magnitude of the perturbation(s) was sampled for each subject from log-normal distributions, with mean and variance of the order of that seen with the MITRE point rule on the real dataset (control log mean = −6, control log std. = 1.5, case log mean = −3, case log std. = 1.5). When two perturbations were applied, each control subject received only one perturbation, whereas case subjects received both perturbations.
Note that MITRE is a supervised learning (conditional) method, meaning that the microbiome data itself is not modeled; to simulate the time points not present in the original dataset, we therefore must introduce a model of microbiome dynamics. We model the underlying microbiome data as arising from latent time-dependent stochastic processes (Gaussian random walks): $$ {x}_{os}(t)\sim {\mathrm{Normal}}_{+}\left({x}_{os}\left(t-1\right),\Delta t\,{\tau}^2\right) $$ Here, xos(t) is the latent trajectory for OTU o in subject s at time t, and τ2 is the process variance parameter, which is empirically estimated from the real data as approximately the 75th-percentile variance. We assume a Bayesian model and infer the posterior latent trajectories using a 1-step-ahead MCMC algorithm similar to our previously described method [32], except that in this case trajectories are assumed to be independent of one another. The observed data Cs(t), consisting of sequencing counts, are assumed to arise through a two-stage error model: $$ {\displaystyle \begin{array}{c}{y}_{os}(t)\sim {\mathrm{Normal}}_{+}\left({x}_{os}(t),\theta {x}_{os}(t)\right)\\ {}{m}_s(t)=\sum \limits_o{y}_{os}(t)\\ {}{C}_s(t)\sim \mathrm{DMD}\left({y}_s(t)/{m}_s(t),\alpha, N\right)\end{array}} $$ Here, DMD denotes the Dirichlet-multinomial distribution with concentration parameter α and number of simulated sequencing reads per sample N; we use parameters estimated from data (α = 286; N = 50,000) as previously described [15]. The model thus provides temporal coherence through the Gaussian random walk latent trajectories, while modeling compositionality and overdispersion through the two-stage error model. Posterior samples from the model capture temporal trends seen in the real data (e.g., periods of time in which a particular OTU is increasing), but with randomness introduced so that subjects sampled with replacement look sufficiently different.
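A minimal numerical sketch of this simulation model follows; it uses a reflected normal as a crude stand-in for the truncated Normal+ distributions, and the parameter values in the usage example are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_counts(x0, n_times, dt, tau2, theta, alpha, n_reads):
    """Simulate latent positive trajectories x_os(t) as Gaussian random walks,
    add per-observation noise y_os(t), and draw Dirichlet-multinomial counts
    C_s(t). Positivity is enforced by reflection (np.abs) rather than true
    truncation, a simplification relative to the stated model."""
    n_otus = len(x0)
    x = np.empty((n_otus, n_times))
    x[:, 0] = x0
    for t in range(1, n_times):
        # random-walk step: x_os(t) ~ Normal_+(x_os(t-1), dt * tau^2)
        x[:, t] = np.abs(rng.normal(x[:, t - 1], np.sqrt(dt * tau2)))
    counts = np.empty((n_otus, n_times), dtype=int)
    for t in range(n_times):
        # measurement noise: y_os(t) ~ Normal_+(x_os(t), theta * x_os(t))
        y = np.abs(rng.normal(x[:, t], np.sqrt(theta * x[:, t])))
        # Dirichlet-multinomial draw: C_s(t) ~ DMD(y / m, alpha, N)
        probs = rng.dirichlet(alpha * y / y.sum())
        counts[:, t] = rng.multinomial(n_reads, probs)
    return counts

# Illustrative call with the paper's alpha = 286 and N = 50,000:
c = simulate_counts(np.array([1.0, 2.0, 3.0]), 5, 1.0, 0.1, 0.05, 286.0, 50000)
```

Each simulated sample sums to exactly N reads, reflecting the compositional nature of sequencing counts.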
For each scenario simulated (e.g., a particular number of time points or subjects), 10 independent simulations were performed. To investigate the effects of varying the number of subjects, we fixed the simulation at an intermediate number of time points (18) and simulated different numbers of subjects. Similarly, to investigate the effects of varying the number of time points, we fixed the simulation at an intermediate number of subjects (32) and simulated different numbers of time points. To facilitate comparisons, we sampled evenly spaced time points in all cases. The simulated data were then provided to MITRE and the other methods in the same input format as for real data, as described below. Complete Python code to reproduce the simulations is available in the MITRE online repository.

MITRE generates several summaries from the posterior samples obtained from the MCMC inference procedure described above. The point estimate is a single rule set R* (with coefficients β*) that summarizes the mode of the posterior distribution. If the posterior probability that R is empty is greater than 0.5, the point estimate is the empty rule set. Otherwise, to obtain a representative non-empty summary, we determine the posterior mode d* of the total number of detectors in R and take R* to be the rule set with the highest posterior probability among all sampled rule sets that contain d* total detectors. The point estimate coefficients β* are the highest posterior probability sampled coefficients associated with R*. To provide an overview of the possible alternative rule sets learned by the model, rule sets in the posterior samples are clustered and a summary of the highest probability clusters is produced.
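The point-estimate selection just described can be sketched as follows, using the frequency of a rule set among posterior samples as a proxy for its posterior probability; the encoding of rule sets as frozensets of detector tuples is an assumption for illustration.

```python
from collections import Counter

def point_estimate(rule_set_samples):
    """Select the point-estimate rule set from a list of posterior samples.
    If the empty rule set has posterior probability > 0.5, return it; otherwise
    find the mode d* of the total detector count among non-empty samples and
    return the most frequently sampled rule set with d* detectors."""
    n_empty = sum(1 for r in rule_set_samples if len(r) == 0)
    if n_empty / len(rule_set_samples) > 0.5:
        return frozenset()
    def n_detectors(r):
        # each rule set is a frozenset of rules; each rule a tuple of detectors
        return sum(len(rule) for rule in r)
    size_counts = Counter(n_detectors(r) for r in rule_set_samples if r)
    d_star = size_counts.most_common(1)[0][0]
    candidates = Counter(r for r in rule_set_samples if n_detectors(r) == d_star)
    return candidates.most_common(1)[0][0]
```

Conditioning on d* first keeps the summary at the most probable model size rather than simply the most frequent single sample.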
The clustering process first forms clusters of detectors that apply to the average value or slope of the same variable in highly overlapping time windows (ignoring threshold values), then clusters together rules whose component detectors belong to the same clusters, and finally groups rule sets whose rules belong to the same clusters (see Additional file 3: Supplementary Note for a full description). For each cluster, a representative rule set and the estimated posterior probability that R* belongs to the cluster are reported. Calculation of the Bayes factor for the empty rule set R0 (versus any non-empty R), and of two additional types of Bayes factors indicating the relevance of phylogenetic subtrees or time windows, is described in Additional file 3: Supplementary Note. Finally, MITRE generates an interactive graphical visualization of the posterior distribution of rule sets. A heatmap of the Bayes factors for leaf variable/basic time window combinations is rendered alongside the bacterial phylogeny (as in Fig. 3a, d); clicking on any cell displays the detectors associated with that cell that have the largest Bayes factors (as in Fig. 3b–c, e).

Data preprocessing and filtering

MITRE offers a number of user-configurable options for preprocessing and filtering microbiome time-series data. The following procedure is recommended and was used for the results presented here (except as noted below): To remove potentially spurious rare OTUs, discard all OTUs with fewer than Ncounts,OTU total counts observed across all samples (typically Ncounts,OTU = 10). To exclude samples where coverage is so low that abundance estimates for uncommon OTUs may be unreliable, discard all samples with fewer than Ncounts,sample total counts observed across all remaining OTUs (typically Ncounts,sample = 5000 for HiSeq/MiSeq data).
If desired, to analyze only a particular time period (because, e.g., samples are not available outside that period for the majority of subjects), discard all samples before time ti and after time tf (by default, ti is the time of the earliest available sample, and tf the time of the latest sample). To exclude subjects for whom microbiome dynamics cannot be resolved at the desired temporal resolution throughout the entire study, discard subjects with too few, or too sparse, observation points, by dividing the duration of the study into Nw,filter equal pieces and keeping only subjects with at least Ns samples in any Nc consecutive such pieces. Default values are Nw,filter = 10, Ns = 2, and Nc = 1; note that for data with very inconsistent sampling, these parameters must be chosen judiciously to maximize the number of subjects included while allowing an acceptable level of temporal resolution. After these steps, counts data are converted to relative abundance data for each sample, and, for each node in the phylogenetic tree, a relative abundance estimate is obtained by summing the relative abundances of its children. The following filtering steps are then applied to all taxa (both OTUs and higher nodes in the tree): To exclude low-abundance taxa (for which abundance estimates may be inaccurate) or infrequently observed taxa (which we expect are unlikely to be explanatory, though higher taxa including them may be), discard all taxa except those that exceed a threshold abundance a in at least Na consecutive samples in at least Ni subjects. Typically, Na = 2. Appropriate values for a and Ni depend on the number of subjects and typical reads per sample; for studies with 10–150 subjects and average reads per sample on the order of 10^4, we recommend a = 10^−4 and Ni = 4 or 10% of the number of subjects, whichever is larger.
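The consecutive-sample abundance criterion above might be implemented as in this sketch; the function name and per-subject data layout are illustrative assumptions.

```python
import numpy as np

def passes_abundance_filter(rel_abund_by_subject, a=1e-4,
                            n_consecutive=2, min_subjects=4):
    """Keep a taxon if its relative abundance exceeds threshold `a` in at least
    `n_consecutive` consecutive samples in at least `min_subjects` subjects.
    rel_abund_by_subject: one 1-D array per subject, samples in time order."""
    def has_run(x):
        run = 0
        for v in x:
            run = run + 1 if v > a else 0  # count consecutive above-threshold samples
            if run >= n_consecutive:
                return True
        return False
    return sum(has_run(x) for x in rel_abund_by_subject) >= min_subjects
```

With the recommended defaults (a = 10^−4, Na = 2), a taxon must be observed above threshold in two back-to-back samples in several subjects before it enters the detector pool.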
To minimize redundancy, discard all taxa corresponding to nodes in the phylogenetic tree with exactly one remaining child taxon, as their temporal patterns are often very similar to those of their children. Note that when a large number of taxa are considered, the detector pool becomes large and the computational cost of the inference algorithm grows; if necessary, it is recommended to increase the stringency of the abundance and redundancy filters above to keep the total number of taxa below 500.

Bioinformatics and preprocessing for analyzed datasets

For each 16S-based dataset to which MITRE was applied, the original 16S rRNA amplicon sequencing data were reprocessed to obtain tables of OTU abundances and phylogenetic placements for each OTU on a reference tree, using as consistent an analysis process as possible given the differences in sequencing methodology and the nature of the available data. Where possible, DADA2 1.1.5 [33] was used to trim, quality filter, merge, and remove chimeras from the reads, assign them to inferred true sequences, and classify the inferred sequences taxonomically (such classification is not necessary for MITRE but is helpful for interpretation of the results). Inferred sequences were then placed on a reference tree generated from full-length or near full-length (> 1200 nt) 16S rDNA sequences of type strains from the Ribosomal Database Project [34] using pplacer [19]. When quality scores for sequences were not available and DADA2 could not be used, sequences were instead processed using mothur 1.35.1 [35, 36] for denoising, quality filtering, alignment against the ARB Silva 16S gene sequence reference database, clustering into OTUs at 97% identity, and taxonomic classification. For the WGS metagenomics data of Kostic et al. [5], published taxonomic abundance tables were used directly as input data to MITRE, exploiting MITRE's built-in support for parsing Metaphlan result tables, described in the MITRE manual.
Full details regarding the preprocessing of all datasets are described in Additional file 3: Supplementary Note. After reanalyzing the sequencing data for each study and excluding subjects with infrequent sampling (see description of filtering methods above), we applied MITRE and the comparator methods to classify the subjects according to relevant categories: membership in the Russian cohort (n = 30), elevated IgE levels (n = 28), diagnosis with any allergy (n = 49), any dietary allergy (n = 42), dairy allergy (n = 32), or egg allergy (n = 25) in the data of Vatanen et al. (n = 113 total for nationality; n = 109 total for all other outcomes); formula-dominant diet (n = 11) or Cesarean delivery (n = 13) in the data of Bokulich et al. (n = 35 total); seroconversion (n = 11) in the data of Kostic et al. (n = 19 total); premature delivery (n = 6) in the data of DiGiulio et al. (n = 37 total); and plant-based diet (n = 10) in the data of David et al. (n = 20 total).

Comparison methods

To compare MITRE's predictive performance to alternative methods, each OTU's abundance data for each subject was averaged across all observations within each of a set of time windows, defined by dividing the experiment into Nw,comparison equal intervals and taking any consecutive Nc,comparison such intervals as a valid time window. Parameters were chosen to maximize temporal resolution while ensuring that each time window still contained at least one observation for each subject. Note that the same subjects (i.e., those not excluded by the preprocessing settings described above) were used for both MITRE and the comparison methods. For David et al. [3], Nw,comparison = 10 and Nc,comparison = 3; for Vatanen et al. [6], Nw,comparison = 9 and Nc,comparison = 4; for DiGiulio et al. [7], Nw,comparison = Nc,comparison = 1; for Bokulich et al. [2], Nw,comparison = 12 and Nc,comparison = 2; for Kostic et al. [5], Nw,comparison = 5 and Nc,comparison = 2.
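The window-averaging featurization used for the comparator methods can be sketched as follows; window boundaries are treated as inclusive in this simplified version.

```python
import numpy as np

def window_averages(times, abundances, t_start, t_end, n_windows, n_consecutive):
    """Average each taxon's abundance within every valid time window, where a
    window is any `n_consecutive` of `n_windows` equal divisions of
    [t_start, t_end]. times: 1-D sample times; abundances: array of shape
    (n_taxa, n_samples). Returns an (n_taxa, n_valid_windows) feature matrix."""
    edges = np.linspace(t_start, t_end, n_windows + 1)
    feats = []
    for w in range(n_windows - n_consecutive + 1):
        lo, hi = edges[w], edges[w + n_consecutive]
        in_win = (times >= lo) & (times <= hi)  # inclusive boundaries
        feats.append(abundances[:, in_win].mean(axis=1))
    return np.column_stack(feats)
```

The resulting per-subject feature vectors are what a generic classifier (e.g., a random forest) consumes, in contrast to MITRE's detectors, which learn the relevant windows and thresholds.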
These averaged abundances were then used to train random forest or logistic regression classifiers using the Python package scikit-learn [37]. Random forest classifiers included 1024 trees (larger numbers of trees were not found to improve classifier performance). For logistic regression with L1 regularization, the regularization strength parameter was chosen using tenfold cross-validation from among a grid of logarithmically spaced options spanning the range 10^−4 to 10^4.

Availability of data and materials

The MITRE software package is available at https://github.com/gerberlab/mitre/ [13] under a GNU GPL open-source license. All additional input files needed to reproduce the results presented here, and detailed output from all MITRE simulations discussed, are available at https://github.com/gerberlab/mitre_paper_results [38]. Datasets analyzed during the current study are available: data from Bokulich et al. [2], from ENA (https://www.ebi.ac.uk/ena/data/view/PRJEB14529) [39] and QIITA (https://qiita.ucsd.edu, study 10249) [40]; data from David et al. [3], from MG-RAST (http://metagenomics.anl.gov/linkin.cgi?project=mgp6248) [41]; data from Kostic et al. [5], via the DIABIMMUNE project website (https://pubs.broadinstitute.org/diabimmune/t1d-cohort) [42]; data from Vatanen et al. [6], via the DIABIMMUNE project website (https://pubs.broadinstitute.org/diabimmune/three-country-cohort) [43]; data from DiGiulio et al. [7] are expected to be available through the NCBI Sequence Read Archive (https://www.ncbi.nlm.nih.gov/sra) under BioProject PRJNA288562 [44] but were not available at the time of submission of this manuscript; these data are also available from the present authors upon reasonable request and with the permission of the authors of [7].

References

Gerber GK. The dynamic microbiome. FEBS Lett. 2014;588:4131–9. Bokulich NA, Chung J, Battaglia T, Henderson N, Jay M, Li H, DL A, Wu F, Perez-Perez GI, Chen Y, et al. Antibiotics, birth mode, and diet shape microbiome maturation during early life.
Sci Transl Med. 2016;8:343ra382. David LA, Maurice CF, Carmody RN, Gootenberg DB, Button JE, Wolfe BE, Ling AV, Devlin AS, Varma Y, Fischbach MA, et al. Diet rapidly and reproducibly alters the human gut microbiome. Nature. 2014;505:559–63. Yassour M, Vatanen T, Siljander H, Hamalainen AM, Harkonen T, Ryhanen SJ, Franzosa EA, Vlamakis H, Huttenhower C, Gevers D, et al. Natural history of the infant gut microbiome and impact of antibiotic treatment on bacterial strain diversity and stability. Sci Transl Med. 2016;8:343ra381. Kostic AD, Gevers D, Siljander H, Vatanen T, Hyotylainen T, Hamalainen AM, Peet A, Tillmann V, Poho P, Mattila I, et al. The dynamics of the human infant gut microbiome in development and in progression toward type 1 diabetes. Cell Host Microbe. 2015;17:260–73. Vatanen T, Kostic AD, d'Hennezel E, Siljander H, Franzosa EA, Yassour M, Kolde R, Vlamakis H, Arthur TD, Hamalainen AM, et al. Variation in microbiome LPS immunogenicity contributes to autoimmunity in humans. Cell. 2016;165:842–53. DiGiulio DB, Callahan BJ, McMurdie PJ, Costello EK, Lyell DJ, Robaczewska A, Sun CL, Goltsman DS, Wong RJ, Shaw G, et al. Temporal and spatial variation of the human microbiota during pregnancy. Proc Natl Acad Sci U S A. 2015;112:11060–5. Romero R, Hassan SS, Gajer P, Tarca AL, Fadrosh DW, Bieda J, Chaemsaithong P, Miranda J, Chaiworapongsa T, Ravel J. The vaginal microbiota of pregnant women who subsequently have spontaneous preterm labor and delivery and those with a normal delivery at term. Microbiome. 2014;2:18. Raveh-Sadka T, Thomas BC, Singh A, Firek B, Brooks B, Castelle CJ, Sharon I, Baker R, Good M, Morowitz MJ, Banfield JF. Gut bacteria are rarely shared by co-hospitalized premature infants, regardless of necrotizing enterocolitis development. Elife. 2015;4:e05477. Zhou Y, Shan G, Sodergren E, Weinstock G, Walker WA, Gregory KE. 
Longitudinal analysis of the premature infant intestinal microbiome prior to necrotizing enterocolitis: a case-control study. PLoS One. 2015;10:e0118632. Pop M, Paulson JN, Chakraborty S, Astrovskaya I, Lindsay BR, Li S, Bravo HC, Harro C, Parkhill J, Walker AW, et al. Individual-specific changes in the human gut microbiota after challenge with enterotoxigenic Escherichia coli and subsequent ciprofloxacin treatment. BMC Genomics. 2016;17:440. van Rensburg JJ, Lin H, Gao X, Toh E, Fortney KR, Ellinger S, Zwickl B, Janowicz DM, Katz BP, Nelson DE, et al. The human skin microbiome associates with the outcome of and is influenced by bacterial infection. MBio. 2015;6:e01315. Bogart E, Creswell R, Gerber G. The microbiome interpretable temporal rule engine. Github. 2019. https://doi.org/10.5281/zenodo.2634301. Tandon D, Haque MM, Mande SS. Inferring intra-community microbial interaction patterns from metagenomic datasets using associative rule mining techniques. PLoS One. 2016;11:e0154493. Bucci V, Tzen B, Li N, Simmons M, Tanoue T, Bogart E, Deng L, Yeliseyev V, Delaney ML, Liu Q, et al. MDSINE: Microbial Dynamical Systems INference Engine for microbiome time-series analyses. Genome Biol. 2016;17:121. Subramanian S, Huq S, Yatsunenko T, Haque R, Mahfuz M, Alam MA, Benezra A, DeStefano J, Meier MF, Muegge BD, et al. Persistent gut microbiota immaturity in malnourished Bangladeshi children. Nature. 2014;510:417–21. Sze MA, Schloss PD. Looking for a signal in the noise: revisiting obesity and the microbiome. MBio. 2016;7(4):e01018–16. Teng F, Yang F, Huang S, Bo C, Xu ZZ, Amir A, Knight R, Ling J, Xu J. Prediction of early childhood caries via spatial-temporal variations of oral microbiota. Cell Host Microbe. 2015;18:296–306. Matsen FA, Kodner RB, Armbrust EV. pplacer: linear time maximum-likelihood and Bayesian phylogenetic placement of sequences onto a fixed reference tree. BMC Bioinformatics. 2010;11:538. 
Carmody RN, Gerber GK, Luevano JM Jr, Gatti DM, Somes L, Svenson KL, Turnbaugh PJ. Diet dominates host genotype in shaping the murine gut microbiota. Cell Host Microbe. 2015;17:72–84. Gerber GK, Onderdonk AB, Bry L. Inferring dynamic signatures of microbes in complex host ecosystems. PLoS Comput Biol. 2012;8:e1002624. Lu J, Shi P, Li H. Generalized linear models with linear constraints for microbiome compositional data. In: ArXiv e-prints; 2018. Crost EH, Tailford LE, Monestier M, Swarbreck D, Henrissat B, Crossman LC, Juge N. The mucin-degradation strategy of Ruminococcus gnavus: the importance of intramolecular trans-sialidases. Gut Microbes. 2016;7:302–12. Berry DA. Bayesian clinical trials. Nat Rev Drug Discov. 2006;5:27–36. Donald B, Rubin D. Fisher, Neyman, and Bayes at FDA. J Biopharm Stat. 2016;26:1020–4. Wasserstein R, Lazar N. The ASA's statement on p-values: context, process, and purpose. Am Stat. 2016;70:129–33. Kass R, Raftery A. Bayes factors. J Am Stat Assoc. 1995;90:773–95. Friedman J, Popescu B. Predictive learning via rule ensembles. Ann Appl Stat. 2008;2:916–54. Letham B, Rudin C, McCormick T, Madigan D. Interpretable classifiers using rules and Bayesian analysis: building a better stroke prediction model. Ann Appl Stat. 2015;9:1350–71. Urbanowicz R, Ryan J, Moore J. Learning classifier systems: a complete introduction, review, and roadmap. J Artif Evol Appl. 2009;2009:1–25. Polson N, Scott J, Windle J. Bayesian inference for logistic models using Pólya–Gamma latent variables. J Am Stat Assoc. 2013;108:1339–49. Gibson T, Gerber G. Robust and scalable models of microbiome dynamics. In: Jennifer D, Andreas K, editors. Proceedings of the 35th International Conference on Machine Learning. Stockholm: Proceedings of Machine Learning Research; 2018. Callahan BJ, McMurdie PJ, Rosen MJ, Han AW, Johnson AJ, Holmes SP. DADA2: high-resolution sample inference from Illumina amplicon data. Nat Methods. 2016;13:581–3. 
Cole JR, Wang Q, Fish JA, Chai B, McGarrell DM, Sun Y, Brown CT, Porras-Alfaro A, Kuske CR, Tiedje JM. Ribosomal Database Project: data and tools for high throughput rRNA analysis. Nucleic Acids Res. 2014;42:D633–42. Schloss PD, Westcott SL, Ryabin T, Hall JR, Hartmann M, Hollister EB, Lesniewski RA, Oakley BB, Parks DH, Robinson CJ, et al. Introducing mothur: open-source, platform-independent, community-supported software for describing and comparing microbial communities. Appl Environ Microbiol. 2009;75:7537–41. Kozich JJ, Westcott SL, Baxter NT, Highlander SK, Schloss PD. Development of a dual-index sequencing strategy and curation pipeline for analyzing amplicon sequence data on the MiSeq Illumina sequencing platform. Appl Environ Microbiol. 2013;79:5112–20. Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, Blondel M, Prettenhofer P, Weiss R, Dubourg V, et al. Scikit-learn: machine learning in Python. J Mach Learn Res. 2011;12:2825–30. Bogart E, Creswell R, Gerber G. Supporting files for the microbiome interpretable temporal rule engine manuscript. Github. 2019. https://doi.org/10.5281/zenodo.3345235. Bokulich NA, Chung J, Battaglia T, Henderson N, Jay M, Li H, DL A, Wu F, Perez-Perez GI, Chen Y, et al. Antibiotics, birth mode, and diet shape microbiome maturation during early life. Datasets. Eur Nucleotide Arch. 2016. https://www.ebi.ac.uk/ena/data/view/PRJEB14529. Accessed 21 July 2019. Bokulich NA, Chung J, Battaglia T, Henderson N, Jay M, Li H, DL A, Wu F, Perez-Perez GI, Chen Y, et al. Antibiotics, birth mode, and diet shape microbiome maturation during early life. Datasets. QIITA. 2016. https://qiita.ucsd.edu/study/description/10249. Accessed 21 July 2019. David LA, Maurice CF, Carmody RN, Gootenberg DB, Button JE, Wolfe BE, Ling AV, Devlin AS, Varma Y, Fischbach MA, et al. DietTimeSeries. Datasets. MG-RAST. 2013. https://www.mg-rast.org/linkin.cgi?project=mgp6248. Accessed 21 July 2019.
Kostic AD, Gevers D, Siljander H, Vatanen T, Hyotylainen T, Hamalainen AM, Peet A, Tillmann V, Poho P, Mattila I, et al: T1D cohort. Datasets. DIABIMMUNE. 2015. https://pubs.broadinstitute.org/diabimmune/t1d-cohort;. Accessed 21 July 2019. Vatanen T, Kostic AD, d'Hennezel E, Siljander H, Franzosa EA, Yassour M, Kolde R, Vlamakis H, Arthur TD, Hamalainen AM, et al: Three country cohort. Datasets. DIABIMMUNE. 2016. https://pubs.broadinstitute.org/diabimmune/three-country-cohort. Accessed 21 July 2019. DiGiulio DB, Callahan BJ, McMurdie PJ, Costello EK, Lyell DJ, Robaczewska A, Sun CL, Goltsman DS, Wong RJ, Shaw G, et al: Temporal and spatial variation of the human microbiota during pregnancy. Datasets. NCBI. 2018. https://www.ncbi.nlm.nih.gov/bioproject/PRJNA288562;. Accessed 21 July 2019 We thank Daniel DiGiulio for assistance with the data from reference [7] and Travis Gibson for helpful comments on the manuscript. This work was supported by the Brigham and Women's Hospital Precision Medicine Initiative and NIH 1R01GM130777. EB received support from NIH T32HL007627. Massachusetts Host-Microbiome Center, Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, 60 Fenwood Road, Boston, MA, USA Elijah Bogart, Richard Creswell & Georg K. Gerber Present address: Kintai Therapeutics, Inc., 26 Landsdowne Street Suite 450, Cambridge, MA, 02139, USA Elijah Bogart Richard Creswell Georg K. Gerber EB and GKG conceived the method, developed the theory, and wrote the manuscript. EB wrote the software, curated the relevant data, and formally analyzed and validated the method and software. RC contributed to the software and analysis and validation of the method. All authors read and approved the final manuscript. Correspondence to Georg K. Gerber. GKG is a Strategic Advisory Board Member of Kaleido Biosciences and had a sponsored research agreement with the company, and is a Scientific Advisory Board Member, co-founder, and shareholder of ConsortiaTX. 
No funding for the present work was provided by either company. EB is an employee and shareholder of Kintai Therapeutics, which provided no funding for the present work. RC declares that he has no competing interests. Additional file 1: Table S1. Classification problems to which MITRE and comparator methods were applied, and performance of the methods applied to each. (PDF 100 kb) Table S2. Technical details of the application of MITRE and comparator methods to each classification problem. (PDF 111 kb) Supplementary Note: additional technical details of the MITRE method and analyses run. (PDF 324 kb) Mathematical Appendix: mathematical details of the MITRE model and inference method. (PDF 255 kb) Bogart, E., Creswell, R. & Gerber, G.K. MITRE: inferring features from microbiota time-series data linked to host status. Genome Biol 20, 186 (2019). https://doi.org/10.1186/s13059-019-1788-y Received: 21 November 2018 Microbiome Biology
Standing Waves of the Coupled Nonlinear Schrödinger Equations
L. L. Yang & G. M. Wei
10.4208/ata.2014.v30.n4.1
Anal. Theory Appl., 30 (2014), pp. 345-353.
In this paper, we study the existence of standing waves of the coupled nonlinear Schrödinger equations. The proofs, which rely on the Lyapunov-Schmidt method and the contraction mapping principle, are due to F. Weinstein [1].

Approximation of the Cubic Functional Equations in Lipschitz Spaces
A. Ebadian, N. Ghobadipour, I. Nikoufar & M. Eshaghi Gordji
Let $\mathcal{G}$ be an Abelian group and let $\rho:\mathcal{G} \times \mathcal{G} \rightarrow [0, \infty)$ be a metric on $\mathcal{G}$. Let $\varepsilon$ be a normed space. We prove that, under some conditions, if $f:\mathcal{G}\to\varepsilon$ is an odd function and $C_x:\mathcal{G}\to\varepsilon$ defined by $C_x(y):=2f(x+y)+2f(x-y)+12f(x)-f(2x+y)-f(2x-y)$ is a cubic function for all $x\in \mathcal{G}$, then there exists a cubic function $C:\mathcal{G}\to\varepsilon$ such that $f-C$ is Lipschitz. Moreover, we investigate the stability of the cubic functional equation $2f(x+y)+2f(x-y)+12f(x)-f(2x+y)-f(2x-y)=0$ on Lipschitz spaces.

The Boundedness of the Commutator for Riesz Potential Associated with Schrödinger Operator on Morrey Spaces
Dongxiang Chen & Liang Song
Let $\mathcal{L}=-\Delta+V$ be the Schrödinger operator on $\mathbb{R}^d$, where $\Delta$ is the Laplacian on $\mathbb{R}^{d}$ and $V\ne0$ is a nonnegative function satisfying the reverse Hölder inequality. The authors prove that the Riesz potential $\mathcal{J}_{\beta}$ and its commutator $[b,\mathcal{J}_{\beta}]$ associated with $\mathcal{L}$ map from $M_{\alpha,v}^{p,q}$ into $M_{\alpha,v}^{p_1,q_1}$.

Subordination Results for $p$-Valent Meromorphic Functions Associated with a Linear Operator
A. O. Mostafa & M. K. Aouf
In this paper, by making use of Hadamard products, we obtain some subordination results for a certain family of meromorphic functions defined by a new linear operator.

$L^q$ Inequalities and Operator Preserving Inequalities
M. Bidkham & S. Ahmadi
Let $\mathbb{P}_n$ be the class of polynomials of degree at most $n$. Rather and Shah [15] proved that if $P\in \mathbb{P}_n$ and $P(z)\neq 0$ in $|z| < 1$, then for every $R > 0$ and $0 \leq q < \infty$, $$| B[P(Rz)]|_q \leq \frac{| R^{n}B[z^n] +\lambda_0 |_{q}}{| 1+z^n|_q} | P(z)|_q,$$ where $B$ is a $B_{n}$-operator. In this paper, we prove a generalization of this result which in particular yields some known polynomial inequalities as special cases. We also consider an operator $D_{\alpha}$ which maps a polynomial $P(z)$ into $D_{\alpha} P(z) := n P(z) + ( \alpha - z ) P' (z)$ and obtain extensions and generalizations of a number of well-known $L_{q}$ inequalities.

Some Characterizations of $VMO(\mathbb{R}^n)$
Y. Ding & T. Mei
In this paper we give three characterizations of the $VMO(\mathbb{R}^n)$ space, which are of John-Nirenberg type, Uchiyama type and Miyachi type, respectively.

Some Results on the Simultaneous Approximation
M. R. Haddadi
In this paper, we give some results on simultaneously proximinal and simultaneously Chebyshev subsets of uniformly convex Banach spaces. We also give a relation between fixed point theory and simultaneous proximity.

On Weighted Approximation by Modified Bernstein Operators for Functions with Singularities
D. S. Yu & M. L. Wang
Della Vecchia et al. (see [2]) introduced a kind of modified Bernstein operators which can be used to approximate functions with singularities at the endpoints of $[0,1]$. In the present paper, we obtain a kind of pointwise Stechkin-type inequality for weighted approximation by the modified Bernstein operators.

Nonconstant Harmonic Functions on the Level 3 Sierpinski Gasket
D. L. Tang & R. Hu
We give a detailed description of nonconstant harmonic functions on the level 3 Sierpinski gasket. Then we extend the method to the $\beta$-set with $1/3< \beta < 1/2$.

Inequalities for the Polar Derivatives of a Polynomial
B. A. Zargar
10.4208/ata.2014.v30.n4.10
Let $P(z)$ be a polynomial of degree $n$ having all its zeros in $|z|\leq 1$. In this paper, we estimate the $k$th polar derivative of $P(z)$ on $|z|=1$ and thereby obtain compact generalizations of some known results which, among other things, yield a refinement of a result due to Paul Turán.
Applied Water Science, February 2019, 9:15 | Cite as

Modelling of the impact of water quality on the infiltration rate of the soil

Balraj Singh, Parveen Sihag, Surinder Deswal

First Online: 14 January 2019

The aim of this paper is to assess the potential of three regression-based techniques, i.e. M5P tree, support vector machine (SVM) and Gaussian process (GP), for estimating the infiltration rate of the soil, and to compare them with two empirical models, i.e. the Kostiakov model and multi-linear regression (MLR). In total, 132 observations were obtained from laboratory experiments, of which 92 were used for training and the remaining 40 for testing the models. A double-ring infiltrometer was used for the experiments, with different concentrations of impurities (1%, 5%, 10% and 15%) and different types of impurity affecting water quality (ash and organic manure). Cumulative time (Tf), type of impurities (It), concentration of impurities (Ci) and moisture content (Wc) were the input variables, whereas the infiltration rate was the target. For SVM and GP regression, two kernel functions (the radial basis kernel and the Pearson VII kernel function) were used. The results of this investigation suggest that the M5P tree technique is more precise than the GP, SVM and MLR approaches and the Kostiakov model. Among the GP, SVM and MLR approaches and the Kostiakov model, MLR is the more accurate for estimating the infiltration rate of the soil. Thus, M5P tree is a technique that could be used for modelling the infiltration rate for the given data set. Sensitivity analysis suggests that cumulative time (Tf) is the parameter that most influences the infiltration rate of the soil.

Keywords: Double-ring infiltrometer; Gaussian process; Support vector regression; M5P tree model

The process by which water enters the soil through its top surface is called infiltration, and the rate at which it enters is called the infiltration rate (Haghighi et al. 2010).
Infiltration plays an important role in the hydrologic cycle. Many factors influence the infiltration rate, such as rainfall intensity, suction head, water content, type of impurities and field density. Infiltration is associated with surface runoff and groundwater recharge (Uloma et al. 2014) and is also relevant to water supply systems, landslides, irrigation design, flood control and drainage (Igbadun and Idris 2007). From the infiltration rate, the sorptivity and unsaturated hydraulic conductivity of the soil can readily be derived (Chow et al. 1988; Scotter et al. 1982). The hydraulic properties of soil are necessary for the design of drainage systems (Brooks and Corey 1964). At the catchment level, the infiltration characteristic is one of the dominant factors in determining flooding conditions (Bhave and Sreeja 2013). The infiltration capacity of the soil affects the amount of surface flow (Diamond and Shanley 2003). The infiltration rate of a soil is inversely proportional to its water-holding capacity (Singh et al. 2014). Physical changes of the soil also affect the infiltration rate (Gupta and Gupta 2008; Smith 2006; Micheal 1978). Water quality likewise affects the infiltration rate, and ultimately both natural and artificial groundwater recharge. Many impurities present at the earth's surface can easily mix with water and change its quality, and several studies have examined the link between water quality and infiltration. Singh et al. (2017) used two types of impurities (ash and organic manure) with three soft computing techniques (M5P model tree, artificial neural network and random forest) and found that random forest predicted the infiltration rate better than the other methods.
Sihag (2018) studied the infiltration rate of sand mixed with different proportions of fly ash and rice husk ash using fuzzy logic and an artificial neural network and found that the artificial neural network outperformed fuzzy logic. Singh et al. (2017a, 2018) and Sihag and Singh (2018) used various empirical infiltration models to calculate the infiltration rate of the soil in their study areas. Tiwari et al. (2017) used a generalised regression neural network, MLR, M5P model tree and SVM to predict the cumulative infiltration of soil and found that SVM worked better than the other techniques. Many researchers have applied soft computing techniques to hydraulics and environmental engineering problems (Sihag et al. 2017b, c, 2018a; Haghiabi et al. 2018; Nain et al. 2018a; Tiwari et al. 2018; Parsaie et al. 2017a, b; Shiri et al. 2016, 2017; Parsaie and Haghiabi 2015, 2017; Parsaie 2016; Azamathulla et al. 2016; Baba et al. 2013) and found that these techniques work exceptionally well. Keeping this in view, the focus of this investigation is the prediction of the infiltration rate using M5P tree, GP, MLR and SVM. Furthermore, the results were compared with an empirical model (Kostiakov 1932), and sensitivity analysis was performed to identify the most important parameter for predicting the infiltration rate of the soil.

Soft computing techniques

Soft computing is among the most relevant modern techniques applied to civil engineering problems (Sihag et al. 2018b, c; Nain et al. 2018b; Haghiabi et al. 2017; Kisi et al. 2017; Parsaie et al. 2017c; Kisi et al. 2015; Parsaie and Haghiabi 2014; Shiri and Kisi 2012). In this investigation, GP, SVM and M5P tree models were used; each is described below.
Gaussian process (GP) regression

GP regression rests on the assumption that nearby observations should share information, and it is a way of specifying a prior directly over the space of functions. A GP is a generalisation of the Gaussian distribution: where a Gaussian distribution is specified by a mean vector and a covariance matrix, a GP is specified by a mean function and a covariance function. Because prior knowledge of the functional dependence and the data is incorporated directly, separate validation for generalisation is not essential. GP regression models are able to provide the predictive distribution corresponding to the test inputs (Rasmussen and Williams 2006). A GP is a collection of random variables, any finite number of which has a joint multivariate Gaussian distribution. Assuming u and v denote the input and output domains, respectively, n pairs (gi, hi) are drawn independently and identically distributed. For regression, it is assumed that h ⊆ ℝ; then a GP on u is specified by a mean function v0: u → ℝ and a covariance function µ: u × u → ℝ. Readers may refer to Kuss (2006a, b) for exhaustive details of GP regression.

Support vector machine (SVM)

This method was first proposed by Vapnik (1998) and is based on statistical learning theory. The main principle of SVM is the optimal separation of classes. Among the infinitely many linear classifiers that separate the classes, SVM selects the one with the lowest generalisation error, or an upper bound on that error, derived from structural risk minimisation. The selected hyperplane therefore maximises the margin between the two classes, where the margin is the sum of the distances from the hyperplane to the nearest points of the two classes. Readers may refer to Smola (1996) for exhaustive details of SVM. Cortes and Vapnik (1995) introduced kernel functions for nonlinear support vector regression.
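The GP posterior mean described above reduces to a few lines of linear algebra. The sketch below is a minimal illustration, not the authors' implementation: it uses the RBF kernel defined later in the paper (default γ = 3.5 and Gaussian noise = 0.8 follow Table 4) and made-up one-dimensional data.

```python
import numpy as np

def rbf(A, B, gamma=3.5):
    # RBF kernel matrix k(a, b) = exp(-gamma * |a - b|^2); gamma = 3.5 as in Table 4.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def gp_posterior_mean(X_train, y_train, X_test, gamma=3.5, noise=0.8):
    # Posterior mean of GP regression with additive Gaussian noise:
    # m(x*) = K(x*, X) (K(X, X) + noise * I)^(-1) y
    K = rbf(X_train, X_train, gamma) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)
    return rbf(X_test, X_train, gamma) @ alpha

# Toy demonstration on a smooth curve (values are illustrative only).
X = np.linspace(0.0, 3.0, 30).reshape(-1, 1)
y = np.sin(X).ravel()
pred = gp_posterior_mean(X, y, np.array([[1.5]]), gamma=1.0, noise=0.01)
```

An SVM regressor would plug the same kernels into a margin-based optimiser; in practice both models (and M5P) are available off the shelf, for example in WEKA, the usual home of the M5P algorithm.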
M5P tree

The M5P tree (Quinlan 1992) is a binary decision tree with linear regression functions at the leaves (terminal nodes), which allows it to predict continuous numerical attributes. Generating the model tree involves two stages. The first stage uses a splitting criterion to grow a decision tree; the criterion treats the standard deviation of the class values reaching a node as a measure of the error at that node. Splitting produces child nodes with a lower standard deviation than the parent node, which are thus considered purer (Quinlan 1992). Of all possible splits, M5P chooses the one that maximises the expected error reduction. This splitting process may overgrow the tree and cause overfitting, so the second stage prunes the tree, replacing overgrown subtrees with linear regression functions. In this way the parameter space is divided into regions, with a linear regression model built in each of them. The standard deviation reduction (SDR) is given as

$${\text{SDR}} = {\text{sd}}(N) - \sum_i \frac{{\left| {N_{i} } \right|}}{{\left| N \right|}}\,{\text{sd}}(N_{i})$$

where N denotes the set of examples that reach the node, Ni denotes the subset of examples having the ith outcome of the potential split, and sd is the standard deviation.

Conventional models

Two conventional models were used in this investigation; both are described below.

Multi-linear regression (MLR)

The multi-linear regression analysis relates f(t) to Tf, It, Ci and Wc; therefore, the following functional relationship may initially be assumed:

$$f(t) = k\;T_{\text{f}}^{a} \cdot I_{\text{t}}^{b} \cdot C_{\text{i}}^{c} \cdot W_{\text{c}}^{d}$$

where k is the proportionality constant.
Taking logarithms gives

$$\log f(t) = \log k + a\log T_{\text{f}} + b\log I_{\text{t}} + c\log C_{\text{i}} + d\log W_{\text{c}}$$

This equation has four explanatory variables. To develop the multi-linear model, log f(t) is taken as the output parameter and the four explanatory variables, namely log Tf, log It, log Ci and log Wc, are taken as input parameters. The regression yields the values of k, a, b, c and d and, in turn, a developed equation of the form (3). The developed multi-linear regression equation is

$$f(t) = 104\left( {\frac{{I_{\text{t}}^{1.310} \cdot C_{\text{i}}^{0.007} }}{{T_{\text{f}}^{0.66} \cdot W_{\text{c}}^{0.270} }}} \right)$$

where It is 1 for ash and 2 for organic manure.

Kostiakov model

The Kostiakov model (Kostiakov 1932) is

$$f(t) = aT_{\text{f}}^{ - b}$$

where a and b are constants. Fitting Eq. (5) to the measured infiltration rates over time gives

$$f(t) = 114.6\,T_{\text{f}}^{ - 0.68}$$

In this investigation, two double-ring infiltrometers were used to measure the infiltration rate of the soil. Each consists of an inner ring and an outer ring, of diameter 300 mm and 450 mm, respectively, as shown in Fig. 1. Each instrument was driven 100 mm into the soil (of its total depth of 300 mm) with uniform blows of a falling-weight hammer, without disturbing the top layer of the soil. Both rings were filled to an equal depth with water, and the initial depth of water in the inner ring was recorded, since water from the inner ring moves directly downwards rather than laterally. The moisture content of the soil was also determined before each experiment by the gravimetric method.

Fig. 1 Double-ring infiltrometer

The experiments were performed in the Hydraulics Laboratory, NIT Kurukshetra, India.
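Because Eq. (5) becomes linear in log-log space, the constants a and b can be recovered by ordinary least squares. The sketch below is illustrative only: the time series is synthetic, generated from the fitted Eq. (6) rather than taken from the measured data.

```python
import numpy as np

# Hypothetical (time, infiltration-rate) series following a Kostiakov-type
# decay; generated from the fitted Eq. (6), not the measured data.
t = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 90.0, 120.0, 180.0])  # min
f = 114.6 * t ** -0.68                                           # mm/h

# Taking logs linearises Eq. (5): log f = log a - b log t, so ordinary
# least squares on (log t, log f) recovers the constants a and b.
slope, intercept = np.polyfit(np.log(t), np.log(f), 1)
a, b = np.exp(intercept), -slope
```

On noise-free data the fit returns a ≈ 114.6 and b ≈ 0.68 exactly; with real field readings the same two lines give the least-squares Kostiakov constants.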
The soil at NIT Kurukshetra is a loam, and the elevation above sea level is 274 m. The climate of Kurukshetra is cold in winter and dry in summer outside the monsoon season (normal annual rainfall 582 mm). The infiltration rate was measured for water mixed with different concentrations of impurities, i.e. 1%, 5%, 10% and 15%, and different types of impurity, i.e. ash and organic manure, which are by-products generally present in the study area. Two double-ring infiltrometers were driven into the soil parallel to each other: one was filled with a fixed concentration of ash and the other with organic manure. The infiltration rate was measured over a fixed interval of 180 min, because after 180 min the infiltration rate reaches its steady value (Sihag et al. 2017). The details of the experimental procedure, along with the range of infiltration rates, are summarised in Table 1, and the infiltration rate versus time for ash and organic manure is plotted in Fig. 2. As Table 1 and Fig. 2 indicate, the infiltration rate is inversely related to time. The initial infiltration rate for water carrying ash was higher than for water carrying organic manure, but the final infiltration rate was higher for organic manure than for ash. With organic manure, the infiltration rate increased with time up to 90 min and then decreased (Singh 2015).

Table 1 Details of the experimental procedure along with the range of infiltration rate
(columns: time (min), type of impurities, concentration of impurity (%), water content (%), range of the infiltration rate (mm/h); impurity types: organic manure and ash; water contents: 3.83, 8.43, 10.16, 11.51 and 13.65%)

Fig. 2 Result analysis of the infiltration rate with different water qualities: a ash and b organic manure

The infiltration-rate experiments were performed between January 2015 and May 2015 in the Hydraulics Laboratory, Civil Engineering Department, NIT Kurukshetra. The geographical co-ordinates of the study area are 29.9490° N and 76.8173° E. The soil on the campus is poorly permeable, with a low infiltration rate. In total, 132 observations were obtained from the field experiments, of which 92 were used for training and the remaining 40 for testing the models. Cumulative time (min), type of impurities (organic manure/ash), concentration of impurities (%) and moisture content (%) were the input variables, whereas the infiltration rate (mm/h) was the output. The features of the data used (including kurtosis and skewness of Tf, Ci, Wc and f(t) for the training, testing and full data sets) and the correlation matrix of the input data set are given in Tables 2 and 3, respectively.

Detail of kernel functions

The design of the SVM- and GP-based regression approaches includes the choice of kernel function. Several kernel functions are available for GP and SVM; in this study, two kernel functions were used with each technique:

Radial basis kernel (RBF): $$K(a,b) = e^{ - \gamma \left| {a - b} \right|^{2} }$$

Pearson VII kernel function (PUK): $$K(a,b) = \left[ {1 + \left( {\frac{{2\sqrt {\left\| {a - b} \right\|^{2} } \sqrt {2^{(1/\omega )} - 1} }}{\sigma }} \right)^{2} } \right]^{ - \omega }$$

where γ, σ and ω are kernel parameters. It is well known that GP and SVM estimation performance depends on a good setting of the meta-parameters: the Gaussian noise, C, γ, σ and ω. These selections control the complexity of the prediction (regression) model. In this study, a trial-and-error method was used to select the primary parameters, choosing values that minimise the RMSE and maximise the CC. The same kernel-specific parameters were used for GP regression as for SVM. Table 4 lists the optimal values of the primary parameters for the GP, SVM and M5P tree models: for GP with RBF, Gaussian noise = 0.80 and γ = 3.5; for GP with PUK, Gaussian noise = 0.80, ω = 0.02 and σ = 0.5; for SVM with RBF, C = 2 and γ = 3.5; for SVM with PUK, C = 2, ω = 0.02 and σ = 0.5; for the M5P tree, m = 4.

Statistical performance evaluation criteria

Correlation coefficient (CC) and root-mean-square error (RMSE) values were calculated to assess the performance of the GP, SVM and M5P tree modelling approaches.
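The two kernels can be written directly from their formulas. The sketch below is a minimal implementation with defaults taken from the Table 4 values (γ = 3.5, ω = 0.02, σ = 0.5); it is illustrative, not the software used in the study.

```python
import math

def rbf_kernel(a, b, gamma=3.5):
    # Radial basis kernel: exp(-gamma * |a - b|^2); gamma = 3.5 as in Table 4.
    d2 = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-gamma * d2)

def puk_kernel(a, b, omega=0.02, sigma=0.5):
    # Pearson VII kernel, omega = 0.02 and sigma = 0.5 as in Table 4:
    # [1 + (2 * ||a - b|| * sqrt(2**(1/omega) - 1) / sigma)**2] ** (-omega)
    d = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return (1.0 + (2.0 * d * math.sqrt(2.0 ** (1.0 / omega) - 1.0) / sigma) ** 2) ** -omega
```

Both functions equal 1 when a = b and decay with the distance between the points; ω controls the tail shape of the PUK kernel, letting it interpolate between Gaussian-like and Lorentzian-like behaviour.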
Coefficient of correlation (CC)

The coefficient of correlation is computed as

$${\text{CC}} = \frac{{m\sum\nolimits_{i = 1}^{m} {o_{i} t_{i} } - \left( {\sum\nolimits_{i = 1}^{m} {o_{i} } } \right)\left( {\sum\nolimits_{i = 1}^{m} {t_{i} } } \right)}}{{\sqrt {m\left( {\sum\nolimits_{i = 1}^{m} {o_{i}^{2} } } \right) - \left( {\sum\nolimits_{i = 1}^{m} {o_{i} } } \right)^{2} } \sqrt {m\left( {\sum\nolimits_{i = 1}^{m} {t_{i}^{2} } } \right) - \left( {\sum\nolimits_{i = 1}^{m} {t_{i} } } \right)^{2} } }}$$

Root-mean-square error (RMSE)

The root-mean-square error is computed as

$${\text{RMSE}} = \sqrt {\frac{1}{m}\sum\nolimits_{i = 1}^{m} {\left( {o_{i} - t_{i} } \right)^{2} } }$$

where oi are the observed values of the infiltration rate, ti are the estimated values of the infiltration rate and m is the number of observations.

This section focuses on the predictive performance of the three proposed soft computing techniques (GP, SVM and M5P tree) and the two empirical models (MLR and the Kostiakov model). The ability of the soft computing models depends on their primary parameters, whose values are listed in Table 4. The input variables were Tf, It, Ci and Wc, and the output was f(t). The results of the soft computing techniques and the empirical models are given in Table 5 (CC and RMSE of GP_RBF, GP_PUK, SVM_RBF, SVM_PUK, MLR, M5P tree and the Kostiakov model for the training and testing data sets).

Figure 3 shows scatter plots of the actual versus predicted infiltration rates from GP regression with the RBF and PUK kernel functions. It is clear from Fig. 3 that neither kernel function predicted the infiltration rate well; of the two, the RBF kernel performed better, with CC and RMSE of 0.4374 and 14.9329 (refer to Table 5), respectively.
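These two criteria translate directly into code. A minimal stdlib sketch (not the authors' implementation) of the CC and RMSE formulas:

```python
import math

def rmse(obs, pred):
    # Root-mean-square error between observed and predicted series.
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def cc(obs, pred):
    # Pearson coefficient of correlation, written exactly as in the CC formula.
    m = len(obs)
    so, sp = sum(obs), sum(pred)
    num = m * sum(o * p for o, p in zip(obs, pred)) - so * sp
    den = math.sqrt(m * sum(o * o for o in obs) - so ** 2) * \
          math.sqrt(m * sum(p * p for p in pred) - sp ** 2)
    return num / den
```

A perfect model gives CC = 1 and RMSE = 0; in this study the two metrics are read together, since a model can have a high CC while remaining biased.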
Fig. 3 Predicted infiltration rate of soil using GP_RBF and GP_PUK

The same data set was also used for the SVM-based regression techniques. Figure 4 shows scatter plots of the infiltration rate predicted by SVM regression with the RBF and PUK kernel functions. Like the GP regression techniques, SVM also failed to predict the infiltration rate well, although its results were slightly better than those of GP regression. The CC and RMSE for SVM with the RBF kernel were 0.5278 and 14.1891, respectively (refer to Table 5).

Fig. 4 Predicted infiltration rate of soil using SVR_RBF and SVR_PUK

The infiltration rate was also predicted with the M5P tree technique, multi-linear regression and the Kostiakov model on the same data set. Figure 5 shows the corresponding scatter plots. It is clear from Fig. 5 that the points from the M5P tree model lie closer to the line of agreement than those of the other two models; its CC is much higher (0.8490) and its RMSE much lower (9.4356). The predictions of the MLR and Kostiakov models were almost the same, with CC of 0.4405 and 0.4806 and RMSE of 15.9657 and 15.0521, respectively.

Fig. 5 Predicted infiltration rate of soil using MLR, M5P tree and Kostiakov model

Comparison of the results

All the techniques and models were compared to find the most efficient at predicting the infiltration rate of the soil. The M5P tree performed best (CC = 0.8490 and RMSE = 9.4356 mm/h), while among GP and SVM, SVM with the RBF kernel outperformed the other kernel functions, with CC and RMSE of 0.5278 and 14.1891 mm/h, respectively.
Table 6 gives statistical information on the actual and predicted values of the infiltration rate for the different soft computing techniques and empirical models. Figure 6 plots the MLR, M5P tree and Kostiakov model predictions against the actual infiltration rates arranged in increasing order. The figure suggests that the values predicted by the M5P model follow the same path as the actual values. When the actual infiltration rates were very high, however, all the techniques gave large errors, because of the large fluctuations in the infiltration rate at the start of each test. Hence, it is clear from Table 5 and Fig. 6 that the M5P tree is the best technique for predicting the infiltration rate in the absence of infiltration data under the same conditions.

Fig. 6 Variation of infiltration rate using the different regression approaches

Sensitivity analysis (SA)

SA identifies the input parameter or parameters that most affect the infiltration rate of the soil. In this investigation, the sensitivity analysis was performed by removing one input parameter at a time; the M5P tree model, with the same primary parameters, was used in each case. Table 7 summarises the results, reporting the M5P tree RMSE (mm/h) for each input combination obtained by removing Tf, It, Ci or Wc in turn from the full set (Tf, It, Ci, Wc). The outcomes from Table 7 suggest that cumulative time is the most important parameter for predicting the infiltration rate of the soil for this data set.

Knowledge of the infiltration process is essential for agriculture, hydrologic studies, watershed management, irrigation system design and drainage design.
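The drop-one-input procedure behind Table 7 is straightforward to reproduce. The sketch below is illustrative only: M5P is not available in common Python libraries, so an ordinary least-squares model stands in for it, and the data are synthetic, constructed so that the first input dominates (mimicking the paper's finding for Tf).

```python
import numpy as np

def fit_rmse(X, y):
    # Ordinary least-squares surrogate model; returns its training RMSE.
    A = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sqrt(np.mean((A @ coef - y) ** 2)))

def loo_sensitivity(X, y, names):
    # Drop each input in turn and record the RMSE of the refitted model;
    # the input whose removal raises the RMSE most is the most influential.
    return {n: fit_rmse(np.delete(X, j, axis=1), y) for j, n in enumerate(names)}

# Hypothetical data in which the first input (standing in for Tf) dominates.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(92, 4))
y = 5.0 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(0.0, 0.05, 92)

scores = loo_sensitivity(X, y, ["Tf", "It", "Ci", "Wc"])
most_influential = max(scores, key=scores.get)
```

Swapping the surrogate for any regressor with a `fit`/`predict`-style interface (including a model tree) leaves the procedure unchanged; only the refitting step differs.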
In this investigation, three soft computing techniques (SVM, GP and M5P tree) and two empirical models (MLR and the Kostiakov model) were used to estimate the infiltration rate of the soil for different water qualities. The results show that the M5P tree model predicts the infiltration rate more efficiently than SVM, GP, MLR and the Kostiakov model, while SVM in turn performed better than GP and MLR and also gave better predictions than the Kostiakov model. Thus, the M5P tree model is the most suitable for predicting the infiltration rate of the soil. Finally, the sensitivity analysis suggests that cumulative time is the parameter that most affects the infiltration rate of the soil with different water qualities, as modelled by the M5P tree for this data set.

References

Azamathulla HM, Haghiabi AH, Parsaie A (2016) Prediction of side weir discharge coefficient by support vector machine technique. Water Sci Technol Water Supply 16(4):1002–1016
Baba APA, Shiri J, Kisi O, Fard AF, Kim S, Amini R (2013) Estimating daily reference evapotranspiration using available and estimated climatic data by adaptive neuro-fuzzy inference system (ANFIS) and artificial neural network (ANN). Hydrol Res 44(1):131–146
Bhave S, Sreeja P (2013) Influence of initial soil condition on infiltration characteristics determined using a disk infiltrometer. ISH J Hydraul Eng 19(3):291–296
Brooks RH, Corey AT (1964) Hydraulic properties of porous media and their relation to drainage design. Trans ASAE 7(1):26–28
Chow VT, Maidment DR, Mays LW (1988) Applied hydrology. McGraw-Hill, New York
Cortes C, Vapnik V (1995) Support-vector networks. Mach Learn 20(3):273–297
Diamond J, Shanley T (2003) Infiltration rate assessment of some major soils. Ir Geogr 36(1):32–46
Gupta BL, Gupta A (2008) Water resources systems and management, 2nd edn. Standard Publishers Distributors, Delhi, pp 510–535
Haghiabi AH, Parsaie A, Ememgholizadeh S (2017) Prediction of discharge coefficient of triangular labyrinth weirs using adaptive neuro fuzzy inference system. Alexandria Eng J 57:1773–1782
Haghiabi AH, Nasrolahi AH, Parsaie A (2018) Water quality prediction using machine learning methods. Water Qual Res J 53(1):3–13
Haghighi F, Gorji M, Shorafa M, Sarmadian F, Mohammadi MH (2010) Evaluation of some infiltration models and hydraulic parameters. Span J Agric Res 8(1):210–217
Igbadun HE, Idris UD (2007) Performance evaluation of infiltration models in a hydromorphic soil. Niger J Soil Environ Res 7(1):53–59
Kisi O, Shiri J, Karimi S, Shamshirband S, Motamedi S, Petkovic D, Hashim R (2015) A survey of water level fluctuation predicting in Urmia Lake using support vector machine with firefly algorithm. Appl Math Comput 270:731–743
Kisi O, Keshavarzi A, Shiri J, Zounemat-Kermani M, Omran EE (2017) Groundwater quality modeling using neuro-particle swarm optimization and neuro-differential evolution techniques. Hydrol Res 48(6):1508–1519
Kostiakov AN (1932) On the dynamics of the coefficient of water percolation in soils and the necessity of studying it from the dynamic point of view for the purposes of amelioration. Trans Sixth Comm Int Soc Soil Sci 1:7–21
Kuss M (2006a) Gaussian process models for robust regression, classification, and reinforcement learning. Doctoral dissertation, Technische Universität, Darmstadt
Kuss M (2006b) Gaussian process models for robust regression, classification, and reinforcement learning. Doctoral dissertation, Technische Universität
Micheal AM (1978) Irrigation, theory and practice. Vikas Press Private Limited, New Delhi
Nain SS, Garg D, Kumar S (2018a) Investigation for obtaining the optimal solution for improving the performance of WEDM of super alloy Udimet-L605 using particle swarm optimization. Eng Sci Technol Int J 21(2):261–273
Nain SS, Sihag P, Luthra S (2018b) Performance evaluation of fuzzy-logic and BP-ANN methods for WEDM of aeronautics super alloy. MethodsX 5(1):890–908. https://doi.org/10.1016/j.mex.2018.04.006
Parsaie A (2016) Predictive modeling the side weir discharge coefficient using neural network. Model Earth Syst Environ 2(2):63
Parsaie A, Haghiabi A (2014) Predicting the side weir discharge coefficient using the optimized neural network by genetic algorithm. Sci J Pure Appl Sci 3(3):103–112
Parsaie A, Haghiabi A (2015) The effect of predicting discharge coefficient by neural network on increasing the numerical modeling accuracy of flow over side weir. Water Resour Manag 29(4):973–985
Parsaie A, Haghiabi AH (2017) Improving modelling of discharge coefficient of triangular labyrinth lateral weirs using SVM, GMDH and MARS techniques. Irrig Drain 66(4):636–654
Parsaie A, Azamathulla HM, Haghiabi AH (2017a) Prediction of discharge coefficient of cylindrical weir–gate using GMDH-PSO. ISH J Hydraulic Eng 24:116–123
Parsaie A, Najafian S, Omid MH, Yonesi H (2017b) Stage discharge prediction in heterogeneous compound open channel roughness. ISH J Hydraulic Eng 23(1):49–56
Parsaie A, Yonesi H, Najafian S (2017c) Prediction of flow discharge in compound open channels using adaptive neuro fuzzy inference system method. Flow Meas Instrum 54:288–297
Quinlan JR (1992) Learning with continuous classes. In: 5th Australian joint conference on artificial intelligence, vol 92, pp 343–348
Rasmussen CE, Williams CK (2006) Gaussian processes for machine learning, vol 1. MIT Press, Cambridge, p 248
Scotter DR, Clothier BE, Harper ER (1982) Measuring saturated hydraulic conductivity and sorptivity using twin rings. Soil Res 20(4):295–304
Shiri J, Kisi O (2012) Estimation of daily suspended sediment load by using wavelet conjunction models. ASCE J Hydrol Eng 17(9):986–1000
Shiri J, Shamshirband S, Kisi O, Karimi S, Bateni SM, HosseiniNazhad SH, Hashemi A (2016) Prediction of water-level in the Urmia lake using the extreme learning machine approach. Water Resour Manag 30:5217–5229
Shiri J, Keshavarzi A, Kisi O, Karimi S (2017) Using soil easily measured parameters for estimating soil water capacity: soft computing approaches. Comput Electron Agric 141:327–339
Sihag P (2018) Prediction of unsaturated hydraulic conductivity using fuzzy logic and artificial neural network. Model Earth Syst Environ 4:189–198
Sihag P, Singh B (2018) Field evaluation of infiltration models. Technogenic Ecol Saf 4(2/2018):3–12
Sihag P, Tiwari NK, Ranjan S (2017a) Estimation and inter-comparison of infiltration models. Water Sci 31(1):34–43
Sihag P, Tiwari NK, Ranjan S (2017b) Modelling of infiltration of sandy soil using gaussian process regression. Model Earth Syst Environ 3(3):1091–1100
Sihag P, Tiwari NK, Ranjan S (2017c) Prediction of unsaturated hydraulic conductivity using adaptive neuro-fuzzy inference system (ANFIS). ISH J Hydraul Eng. https://doi.org/10.1080/09715010.2017.1381861
Sihag P, Jain P, Kumar M (2018a) Modelling of impact of water quality on recharging rate of storm water filter system using various kernel function based regression. Model Earth Syst Environ 4:61–68
Sihag P, Singh B, Vand AS, Mehdipour V (2018b) Modeling the infiltration process with soft computing techniques. ISH J Hydraul Eng.
https://doi.org/10.1080/09715010.2018.1464408 Google Scholar Sihag P, Tiwari NK, Ranjan S (2018c) Support vector regression-based modeling of cumulative infiltration of sandy soil. ISH J Hydraul Eng. https://doi.org/10.1080/09715010.2018.1439776 Google Scholar Singh B (2015) Impact of water quality on infiltration rate of soil. M.Tech. dissertation, National Institute of Technology KurukshetraGoogle Scholar Singh B, Sihag P, Singh D (2014) Study of infiltration characteristics of locally soils. J Civ Eng Environ Technol 1:9–13Google Scholar Singh B, Sihag P, Singh K (2017) Modelling of impact of water quality on infiltration rate of soil by random forest regression. Model Earth Syst Environ 3(3):999–1004Google Scholar Singh B, Sihag P, Singh K (2018) Comparison of infiltration models in NIT Kurukshetra campus. Appl Water Sci 8(2):63Google Scholar Smith B (2006) The farming handbook. University of Natal Press, PietermaritzburgGoogle Scholar Smola AJ (1996) Regression estimation with support vector learning machines. Doctoral dissertation, Master's thesis, Technische Universität MünchenGoogle Scholar Tiwari NK, Sihag P, Ranjan S (2017) Modeling of infiltration of soil using adaptive neuro-fuzzy inference system (ANFIS). J Eng Technol Educ 11(1):13–21Google Scholar Tiwari NK, Sihag P, Kumar S, Ranjan S (2018) Prediction of trapping efficiency of vortex tube ejector. ISH J Hydraul Eng. https://doi.org/10.1080/09715010.2018.1441752 Google Scholar Uloma AR, Samuel AC, Kingsley IK (2014) Estimation of Kostiakov's infiltration model parameters of some sandy loam soils of Ikwuano–Umuahia, Nigeria. Open Trans Geosci 1(1):34–38Google Scholar Vapnik V (1998) Statistical learning theory. 
Wiley, New YorkGoogle Scholar Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. 1.Civil Engineering DepartmentNational Institute of Technology, HamirpurHamirpurIndia 2.Civil Engineering DepartmentNational Institute of Technology, KurukshetraKurukshetraIndia Singh, B., Sihag, P. & Deswal, S. Appl Water Sci (2019) 9: 15. https://doi.org/10.1007/s13201-019-0892-1 Received 29 May 2017 Accepted 03 January 2019 First Online 14 January 2019 Published in cooperation with
Simultaneous observability of infinitely many strings and beams

Vilmos Komornik 1,2, Anna Chiara Lai 3, and Paola Loreti 3
1. College of Mathematics and Computational Science, Shenzhen University, Shenzhen 518060, People's Republic of China
2. Département de mathématique, Université de Strasbourg, 7 rue René Descartes, 67084 Strasbourg Cedex, France
3. Sapienza Università di Roma, Dipartimento di Scienze di Base e Applicate per l'Ingegneria, Via A. Scarpa, 16, 00161, Roma, Italy
* Corresponding author: Paola Loreti
Received June 2019; Revised June 2020; Published August 2020
Fund Project: The first author was supported by the Visiting Professor Programme, Sapienza Università di Roma

We investigate the simultaneous observability of infinite systems of vibrating strings or beams having a common endpoint where the observation is taking place. Our results are new even for finite systems because we allow the vibrations to take place in independent directions. Our main tool is a vectorial generalization of some classical theorems of Ingham, Beurling and Kahane in nonharmonic analysis.

Keywords: Fourier series, non-harmonic analysis, wave equation, strings, beams, observability.
Mathematics Subject Classification: Primary: 93B07; Secondary: 35L05, 74K10, 42A99.
Citation: Vilmos Komornik, Anna Chiara Lai, Paola Loreti. Simultaneous observability of infinitely many strings and beams. Networks & Heterogeneous Media, 2020, 15 (4): 633-652. doi: 10.3934/nhm.2020017

K. Ammari and M. Jellouli, Stabilization of star-shaped networks of strings, Differential Integral Equations, 17 (2004), 1395-1410.
K. Ammari and S. Farhat, Stability of a tree-shaped network of strings and beams, Math. Methods Appl. Sci., 41 (2018), 7915-7935. doi: 10.1002/mma.5255.
K. Ammari and S. Nicaise, Stabilization of Elastic Systems by Collocated Feedback, Lecture Notes in Mathematics, 2124, Springer, Cham, 2015. doi: 10.1007/978-3-319-10900-8.
C. Baiocchi, V.
Komornik and P. Loreti, Ingham type theorems and applications to control theory, Bol. Un. Mat. Ital. B, 2 (1999), 33-63.
C. Baiocchi, V. Komornik and P. Loreti, Généralisation d'un théorème de Beurling et application à la théorie du contrôle, C. R. Acad. Sci. Paris Sér. I Math., 330 (2000), 281-286. doi: 10.1016/S0764-4442(00)00116-6.
C. Baiocchi, V. Komornik and P. Loreti, Ingham–Beurling type theorems with weakened gap conditions, Acta Math. Hungar., 97 (2002), 55-95. doi: 10.1023/A:1020806811956.
J. M. Ball and M. Slemrod, Nonharmonic Fourier series and the stabilization of distributed semilinear control systems, Comm. Pure Appl. Math., 32 (1979), 555-587. doi: 10.1002/cpa.3160320405.
A. Barhoumi, V. Komornik and M. Mehrenberger, A vectorial Ingham–Beurling theorem, Ann. Univ. Sci. Budapest. Eötvös Sect. Math., 53 (2010), 17-32.
A. Beurling, Interpolation for an Interval in ${\mathbb R}^1$, in The Collected Works of Arne Beurling, Vol. 2, Harmonic Analysis (eds. L. Carleson, P. Malliavin, J. Neuberger and J. Wermer), Contemporary Mathematicians, Birkhäuser Boston, Inc., Boston, MA, 1989.
J. W. S. Cassels, An Introduction to Diophantine Approximation, Cambridge Tracts in Mathematics and Mathematical Physics, No. 45, Cambridge University Press, New York, 1957.
R. Dáger and E. Zuazua, Controllability of star-shaped networks of strings, C. R. Acad. Sci. Paris Sér. I Math., 332 (2001), 621-626. doi: 10.1016/S0764-4442(01)01876-6.
R. Dáger and E. Zuazua, Wave Propagation, Observation and Control in 1-d Flexible Multi-structures, Springer Science & Business Media, Vol. 50, Springer-Verlag, Berlin, 2006. doi: 10.1007/3-540-37726-3.
A. Haraux, Séries lacunaires et contrôle semi-interne des vibrations d'une plaque rectangulaire, J. Math. Pures Appl., 68 (1989), 457-465.
A. E. Ingham, Some trigonometrical inequalities with applications in the theory of series, Math. Z., 41 (1936), 367-379. doi: 10.1007/BF01180426.
S. Jaffard, M. Tucsnak and E. Zuazua, On a theorem of Ingham. Dedicated to the memory of Richard J. Duffin, J. Fourier Anal. Appl., 3 (1997), 577-582. doi: 10.1007/BF02648885.
S. Jaffard, M. Tucsnak and E. Zuazua, Singular internal stabilization of the wave equation, J. Differential Equations, 145 (1998), 184-215. doi: 10.1006/jdeq.1997.3385.
J.-P. Kahane, Pseudo-périodicité et séries de Fourier lacunaires, Ann. Sci. de l'E.N.S., 79 (1962), 93-150. doi: 10.24033/asens.1108.
V. Komornik, Exact Controllability and Stabilization. The Multiplier Method, Collection RMA, vol. 36, Masson–John Wiley, Paris–Chicester, 1994.
V. Komornik and P. Loreti, Fourier Series in Control Theory, Springer-Verlag, New York, 2005.
V. Komornik and P. Loreti, Multiple-point internal observability of membranes and plates, Appl. Anal., 90 (2011), 1545-1555. doi: 10.1080/00036811.2011.569497.
J. E. Lagnese, G. Leugering and E. J. P. G. Schmidt, Modeling, Analysis and Control of Dynamic Elastic Multi-Link Structures, Systems & Control: Foundations & Applications, Birkhäuser Boston, Inc., Boston, MA, 1994. doi: 10.1007/978-1-4612-0273-8.
J.-L. Lions, Exact controllability, stabilizability, and perturbations for distributed systems, SIAM Rev., 30 (1988), 1-68. doi: 10.1137/1030001.
J.-L. Lions, Contrôlabilité Exacte et Stabilisation de Systèmes Distribués I-II, Masson, Paris, 1988.
P. Loreti, On some gap theorems, European Women in Mathematics–Marseille 2003, 39–45, CWI Tract, 135, Centrum Wisk. Inform., Amsterdam, 2005.
M. Mehrenberger, Critical length for a Beurling type theorem, Bol. Un. Mat. Ital. B, 8 (2005), 251-258.
E.
Sikolya, Simultaneous observability of networks of beams and strings, Bol. Soc. Paran. Mat., 21 (2003), 31-41. doi: 10.5269/bspm.v21i1-2.7505.
Q. Wu, The smallest Perron numbers, Mathematics of Computation, 79 (2010), 2387-2394. doi: 10.1090/S0025-5718-10-02345-8.

Figure 1. A system of three strings with vibration planes $ \sf{p_j} $ spanned by $ d_j: = (\ell_j,\phi_j,\theta_j) $ and $ v_j\perp d_j $, $ j = 1,2,3 $. In (i) $ \ell_1 = \ell_2 = \ell_3 = 1 $, the $ v_j $'s are pairwise orthogonal, and $ v_1 = d_3 $, $ v_2 = d_1 $, $ v_3 = d_2 $. We have $ T_0 = 2\max\left\lbrace {\ell_1,\ell_2,\ell_3}\right\rbrace = 2 $. In (ii), we have $ \ell_1 = \ell_3 = 1 $, $ \ell_2 = 2/(2+\sqrt{2}) $ and $ v_1 = d_3\perp d_1 = v_2 = v_3 $. Then $ T_0 = 2\max\left\lbrace {\ell_1+\ell_2,\ell_3}\right\rbrace\approx 3.1715 $. In the planar case (iii) we have $ \ell_1 = 1 $, $ \ell_2 = 2/(2+\sqrt{2}) $ and $ \ell_3 = 2/(4+\sqrt{2}) $, so that $ T_0 = 2\max\left\lbrace {\ell_1+\ell_2+\ell_3}\right\rbrace\approx 3.9103 $.
Methodology Article

Identification of gene pairs through penalized regression subject to constraints

Rex Shen 1, Lan Luo 2 & Hui Jiang 2 (ORCID: orcid.org/0000-0003-2718-9811)

This article concerns the identification of gene pairs, or combinations of gene pairs, associated with a biological phenotype or clinical outcome, allowing predictive models to be built that are robust to normalization and can be easily validated and measured by qPCR techniques. However, with a small number of biological samples and a large number of genes, this problem entails high computational complexity and challenges the statistical accuracy of identification. In this paper, we propose a parsimonious model representation and develop efficient algorithms for identification. In particular, we derive an equivalent model subject to a sum-to-zero constraint in penalized linear regression, and establish the correspondence between the nonzero coefficients of the two models. Most importantly, this reduces the model complexity of the traditional approach from quadratic to linear order in the number of candidate genes, while overcoming the difficulty of model nonidentifiability. Computationally, we develop an algorithm using the alternating direction method of multipliers (ADMM) to deal with the constraint. Numerically, we demonstrate that the proposed method outperforms the traditional method in statistical accuracy. Moreover, our ADMM algorithm is more computationally efficient than a coordinate descent algorithm with a local search. Finally, we illustrate the proposed method on a prostate cancer dataset to identify gene pairs that are associated with pre-operative prostate-specific antigen. Our findings demonstrate the feasibility and utility of using gene pairs as biomarkers.
In biomedical research, the identification of genes whose activities are associated with a biological phenotype, disease status, or clinical outcome has been critical to understanding and predictive modeling. These genes, referred to as biomarkers, are further utilized in predictive models to facilitate scientific investigation, clinical diagnosis, prognosis, and treatment development. In this discovery process, the expression levels of candidate genes are measured by genomic techniques that assay thousands of genes simultaneously, permitting the monitoring of molecular variation on a genome-wide scale [1] and providing more precise and reliable diagnosis [2]. DNA microarray, parallel qPCR, and RNA-Seq are widely used techniques that measure gene expression at the mRNA level. Yet, two major issues emerge with regard to the utilization of gene expression. First, the number of genes typically greatly exceeds the number of biological samples, with tens of thousands of genes but only up to a few hundred biological samples or observations. As a result, inference tends to be unstable, misleading, or even invalid due to high statistical uncertainty, in addition to the extremely high cost of computation. This, in turn, demands reliable and accurate methods of identification. Second, prior to any analysis, raw gene expression measurements must be normalized to compensate for differences in labeling, sample preparation, and detection methods. A common practice normalizes each sample's raw expression based on the remaining samples in the same dataset, known as between-sample normalization, often in the form of sample-wise scaling in RNA-Seq data [3]. However, such normalization requires recomputation whenever a sample is added to or removed from the dataset, imposing computational challenges for large studies.
Moreover, any analysis using selected genes based on one dataset may be sensitive to normalization, leading to non-generalizable and/or non-reproducible scientific findings [4]. To address the foregoing challenges, a modeling method based on gene pairs was first presented in the top-scoring pair (TSP) classifier by [5] and later implemented by [6]. Compared to predictors based on individual genes, gene-pair-based predictors are more robust to normalization and have better prediction or classification accuracy. Another advantage of gene-pair-based predictive modeling is its ease of evaluation and validation by qPCR methods. Ideally, to use qPCR to measure a single gene's expression level, one applies the delta-Ct method [7], in which the difference in Ct values between the gene of interest and a housekeeping gene such as GAPDH measures the gene's expression level. However, the between-sample variation of a housekeeping gene may be large, imposing a great challenge [8]. In this sense, gene-pair-based modeling removes the requirement of housekeeping genes, since the difference in Ct values between the two genes of interest can be treated directly as a measurement. Consequently, the two genes in a gene pair serve as internal controls for each other. Due to all these advantages, gene-pair-based predictors have been adopted in several cancer studies [9–11]. Despite these advantages, identifying the best gene pair, or the best combination of several gene pairs, from all possible pairs in a pool of tens of thousands of genes is statistically and computationally challenging due to the combinatorial complexity. For instance, the TSP algorithm employs a direct search, whose running time grows quadratically in terms of the number of candidate genes.
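The quadratic cost of this direct search is easy to see in code. The sketch below is a hypothetical, minimal TSP-style scan, not the implementation of [5, 6]: every pair (i, j) is scored by the between-class difference in how often gene i is expressed below gene j, a rank comparison that is invariant to per-sample monotone normalization. The simulated data and the choice of genes 3 and 7 as the informative pair are illustrative assumptions.

```python
import numpy as np

def top_scoring_pair(X, y):
    """Exhaustive TSP-style search: score every gene pair (i, j) by the
    between-class difference in how often gene i is expressed below gene j,
    and return the best pair. X is an (n samples, p genes) matrix and y a
    binary label vector. The double loop over pairs is what makes the running
    time grow quadratically in the number of candidate genes p."""
    n, p = X.shape
    best_pair, best_score = None, -1.0
    for i in range(p):
        for j in range(i + 1, p):
            less = X[:, i] < X[:, j]        # rank comparison: invariant to any
            s0 = less[y == 0].mean()        # per-sample monotone normalization
            s1 = less[y == 1].mean()
            if abs(s0 - s1) > best_score:
                best_pair, best_score = (i, j), abs(s0 - s1)
    return best_pair, best_score

# Illustrative simulation: genes 3 and 7 reverse their ordering between classes.
rng = np.random.default_rng(0)
X = rng.lognormal(size=(400, 20))
y = np.repeat([0, 1], 200)
X[y == 0, 3] /= 10; X[y == 0, 7] *= 10
X[y == 1, 3] *= 10; X[y == 1, 7] /= 10
pair, score = top_scoring_pair(X, y)
print(pair, round(score, 3))
```

With p genes the inner loop runs p(p-1)/2 times, so doubling the gene pool roughly quadruples the cost, which is the bottleneck the regression reformulation below is designed to avoid.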
Although in practice one can first identify differentially expressed genes and then restrict the search to these individual genes, such a two-step approach is no longer invariant to normalization and may miss informative pairs in which at most one gene is differentially expressed [5]. The computational problem is even more severe when more than one gene pair is sought, such as in k-TSP, which uses exactly the k top disjoint pairs in prediction [12]. Moreover, even though rank-based gene-pair predictors like the TSP are robust to normalization, their utility in modeling complex data remains limited. One possible extension is to use ratios of gene expression levels as predictors and use regression models to select gene pairs. In recent years, regression models with penalties enforcing sparsity (such as the Lasso [13], SCAD [14], and TLP [15] penalties) have been widely used for variable selection, and many efficient algorithms have been proposed for fitting such models. One may employ such an approach by treating the ratios of gene expression levels from all possible gene pairs as candidate predictors. However, this amounts to quadratic complexity in the number of candidate genes. In this paper, we develop a new regression approach and an efficient algorithm for identifying gene pairs associated with biological phenotype or clinical outcome. We propose an equivalent model subject to a sum-to-zero constraint on the regression coefficients, and establish the correspondence between the nonzero coefficients of the two models. Models of this type have been proposed for compositional data [16] and recently for reference point insensitive data [17]. One salient aspect is that this model is more parsimonious, with the number of predictors growing only linearly in the number of candidate genes. To deal with the constraint, we develop an efficient algorithm based on the alternating direction method of multipliers (ADMM) [18, 19] for identification and model parameter estimation.
The new approach retains the benefit of simplicity in interpretation while reducing the complexity from quadratic to linear. Most importantly, the proposed method substantially improves statistical accuracy and computational efficiency. Finally, in simulations, the method compares favorably against the traditional method in terms of the accuracy of identification, and our ADMM algorithm is more computationally efficient than the coordinate descent with local search (CD+LS) algorithm of [17].

High-dimensional linear regression

This section proposes predictive models based on combinations of ratios of gene expression levels, on the grounds that such ratios are not only robust to normalization but can also be easily validated and measured by qPCR techniques. Given p predictors \((x_1,\ldots,x_p)\) measuring the expression levels of p genes \((g_1,\ldots,g_p)\), we consider informative second-order interactions defined by the pairwise ratios \(\{x_j/x_k, 1\leq j<k\leq p\}\) of \((x_1,\ldots,x_p)\) with respect to a continuous response, such as the pre-operative prostate-specific antigen level measured from prostate cancer patients, as demonstrated in the "Results" section. It is assumed that there are only a small number (i.e., much smaller than p) of informative genes. Now consider a regression model in which the response Y depends on a predictor vector \(\boldsymbol{z}\) in a linear fashion:

$$ Y = f(\boldsymbol{z})+ \epsilon \equiv \boldsymbol{\alpha}^{T} \boldsymbol{z} + \epsilon, \quad \epsilon \sim N(0,\sigma^{2}), \qquad (1) $$

where \(\boldsymbol{\alpha}=(\alpha_{12},\alpha_{13},\ldots,\alpha_{p-1,p})^{T}\) and \(\boldsymbol{z}=(\log(x_1/x_2), \log(x_1/x_3), \ldots, \log(x_{p-1}/x_p))^{T}\) are \(q=\frac{p(p-1)}{2}\)-dimensional vectors of regression coefficients and predictors, respectively, and \(\epsilon\) is a random error independent of \(\boldsymbol{z}\). For convenience, for \(i<j\), we let \(\alpha_{ji}=-\alpha_{ij}\). In Eq. (1), the primary reasons for taking the logarithm of the pairwise ratios \(\{x_j/x_k, 1\leq j<k\leq p\}\) are two-fold. First, it stabilizes the variance of gene expression levels so that Eq. (1) is suitable.
In fact, the logarithm transformation is widely used in the literature on gene expression modeling [20]. Second, it facilitates an efficient model-fitting algorithm, to be introduced subsequently. Our objective is to identify the nonzero coefficients of \(\boldsymbol{\alpha}\) corresponding to informative gene pairs based on gene expression. There are several challenges to the identification of informative ratios within the framework of Eq. (1), in which p may greatly exceed the sample size n, a situation known as high-dimensional regression. Normally, one may apply a feature selection method such as the Lasso [13] for this task. Unfortunately, the high dimensionality of Eq. (1) impedes the accuracy of feature selection in the presence of noise, in addition to the computational cost, both roughly proportional to \(p^2\). To overcome these difficulties, we propose an alternative yet equivalent model of Eq. (1) through a more parsimonious representation involving one linear constraint. The next proposition says that f(z) in Eq. (1) has an equivalent representation with only p variables. In a sense, it achieves the objective of dimensionality reduction.

Proposition 1. An equivalent form of f(z) in Eq. (1) is as follows:

$$ f(\boldsymbol{z})= \sum_{j=1}^{p} \beta_{j} \log x_{j}, \quad \beta_{j}=\sum_{k\neq j}\alpha_{jk}. \qquad (2) $$

Importantly, \(\sum_{j=1}^{p} \beta_{j}=0\).

Based on Proposition 1, we derive an equivalent model of Eq. (1):

$$ Y = \boldsymbol{\beta}^{T} \tilde{\boldsymbol{x}} + \epsilon, \quad \sum_{j=1}^{p} \beta_{j}=0, \quad \epsilon \sim N(0,\sigma^{2}), \qquad (3) $$

where \(\tilde{\boldsymbol{x}}=(\log x_{1},\ldots,\log x_{p})^{T}\) and \(\boldsymbol{\beta}=(\beta_{1},\ldots,\beta_{p})^{T}\). Most critically, the correspondence between the coefficients \(\boldsymbol{\alpha}\) and \(\boldsymbol{\beta}\) is established by Eq. (2); note that Eq. (1) and Eq. (3) can have different numbers of nonzero coefficients because of the reparametrization and the sum-to-zero constraint.
For instance, suppose that \(\alpha_{12}=3\), \(\alpha_{23}=-2\), and \(\alpha_{ij}=0\) otherwise in Eq. (1); then \(\beta_1=3\), \(\beta_2=-5\), \(\beta_3=2\), and \(\beta_j=0\) otherwise in Eq. (3). Model Eq. (3) has been proposed for compositional data [16] and recently also for reference point insensitive data [17]. Here [16] established model selection consistency and bounds for the resulting estimator. In contrast to Eq. (1), Eq. (3) contains only \(p\) instead of \(\frac{p(p-1)}{2}\) predictors, subject to the sum-to-zero constraint on the regression coefficients. In other words, model Eq. (3) is more parsimonious than model Eq. (1) in terms of the number of active parameters in a model. As a result, there cannot be a one-to-one correspondence between \(\boldsymbol{\alpha}\) and \(\boldsymbol{\beta}\). It is shown in Eq. (2) that the value of \(\boldsymbol{\beta}\) in Eq. (3) is uniquely determined by that of \(\boldsymbol{\alpha}\) in Eq. (1). The inverse does not hold: many values of \(\boldsymbol{\alpha}\) in Eq. (1) correspond to the same value of \(\boldsymbol{\beta}\) in Eq. (3). The non-existence of a one-to-one correspondence between \(\boldsymbol{\alpha}\) and \(\boldsymbol{\beta}\) is due to the fact that model Eq. (1) is largely non-identifiable. In fact, for any cycle formed by a sequence \(i_1,i_2,\ldots,i_k,i_{k+1}=i_1\), we can add any constant c to the \(\alpha_{i_{j}i_{j+1}}\)'s formed by adjacent indices without changing the model. That is, we can construct \(\boldsymbol{\alpha}'\) where \(\alpha'_{i_{j}i_{j+1}}=\alpha_{i_{j}i_{j+1}}+c\) for \(j=1,\ldots,k\) and \(\alpha'_{ij}=\alpha_{ij}\) otherwise, and model Eq. (1) with \(\boldsymbol{\alpha}\) is equivalent to that with \(\boldsymbol{\alpha}'\). Therefore, when we obtain a solution \(\boldsymbol{\beta}\) by solving Eq. (3), due to the argument above, there are an infinite number of \(\boldsymbol{\alpha}\) related to \(\boldsymbol{\beta}\) through Eq. (2). Among them, we would like to choose the "simplest" one. In this paper, we define the "simplest" \(\boldsymbol{\alpha}\) to be the one(s) with the minimum \(L_1\) norm, where the \(L_1\)-norm of a vector \(\boldsymbol{y}=(y_1,\ldots,y_p)\) is \(||\boldsymbol{y}||_{1}=\sum_{i=1}^{p}|y_{i}|\). In practice, given an estimate of \(\boldsymbol{\beta}\) from (3), an estimate of \(\boldsymbol{\alpha}\) can be obtained using Algorithm 1 below.
Algorithm 1 (Peeling). Given an estimate \(\hat{\boldsymbol{\beta}}=(\hat{\beta}_{1},\cdots,\hat{\beta}_{p})^{T}\) of \(\boldsymbol{\beta}\) satisfying the sum-to-zero constraint \(\sum_{j=1}^{p}\hat\beta_{j}=0\), initialize \(\tilde{\boldsymbol{\beta}}\) as \(\hat{\boldsymbol{\beta}}\) and \(\hat{\boldsymbol{\alpha}}\) as \(\hat{\alpha}_{j,k}=0\) for all \(1\leq j<k\leq p\).

Step 1: Identify one positive and one negative \(\tilde{\beta}_{j}\), say \(\tilde{\beta}_{k_{1}}>0\) and \(\tilde{\beta}_{k_{2}}<0\), where \(k_{1}\) and \(k_{2}\) are two distinct indices from \(\{1,\cdots,p\}\). For instance, \(\tilde{\beta}_{k_{1}}\) and \(\tilde{\beta}_{k_{2}}\) can be taken as the most positive and most negative (ties can be broken arbitrarily) \(\tilde{\beta}_{j}\)'s. This step can always proceed as long as not all \(\tilde{\beta}_{j}\)'s are zero.

Step 2: Set \(\hat\alpha_{k_{1} k_{2}}=\min\left(|\tilde{\beta}_{k_{1}}|, |\tilde{\beta}_{k_{2}}|\right)\). For instance, if \(\tilde{\beta}_{1} = 1.5\) and \(\tilde{\beta}_{2}=-0.5\), then set \(\hat{\alpha}_{12}=0.5\).

Step 3: Subtract \(\hat{\alpha}_{k_{1} k_{2}}\) from \(\tilde{\beta}_{k_{1}}\) and \(-\hat{\alpha}_{k_{1} k_{2}}\) from \(\tilde{\beta}_{k_{2}}\) to make one of them zero, that is, \(\tilde{\beta}_{k_{1}}\leftarrow \tilde{\beta}_{k_{1}}-\hat{\alpha}_{k_{1} k_{2}}\) and \(\tilde{\beta}_{k_{2}} \leftarrow \tilde{\beta}_{k_{2}}+\hat{\alpha}_{k_{1} k_{2}}\). In the previous example, \(\tilde{\beta}_{1} \leftarrow 1\) and \(\tilde{\beta}_{2} \leftarrow 0\).

Step 4: Repeat Steps 1–3 until all components of \(\tilde{\boldsymbol{\beta}}\) become zero.

Algorithm 1 terminates in at most p steps because the number of nonzeros in \(\tilde{\boldsymbol{\beta}}\) decreases by either 1 or 2 after each iteration. Note that \(\tilde{\beta}_{k_{1}}\) and \(\tilde{\beta}_{k_{2}}\) identified in Step 1 may not be unique.
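A minimal sketch of Algorithm 1, assuming \(\hat{\boldsymbol{\beta}}\) is stored as a NumPy vector and \(\hat{\boldsymbol{\alpha}}\) as a skew-symmetric matrix; the function name `peel` and this data layout are our own choices, not part of the paper:

```python
import numpy as np

def peel(beta, tol=1e-12):
    """Algorithm 1 (peeling): given beta summing to zero, build a skew-symmetric
    pairwise coefficient matrix alpha (alpha[j, k] multiplies log(x_j/x_k))
    whose row sums equal beta."""
    beta = np.asarray(beta, dtype=float).copy()
    assert abs(beta.sum()) < tol, "beta must satisfy the sum-to-zero constraint"
    p = len(beta)
    alpha = np.zeros((p, p))
    while np.abs(beta).max() > tol:
        k1 = int(np.argmax(beta))        # most positive entry (Step 1)
        k2 = int(np.argmin(beta))        # most negative entry
        a = min(beta[k1], -beta[k2])     # Step 2
        alpha[k1, k2] += a               # keep alpha skew-symmetric
        alpha[k2, k1] -= a
        beta[k1] -= a                    # Step 3
        beta[k2] += a
    return alpha

beta = np.array([3.0, -5.0, 2.0])        # the example from the text
alpha = peel(beta)
print(alpha.sum(axis=1))                             # row sums recover beta
print(np.abs(alpha[np.triu_indices(3, 1)]).sum())    # equals ||beta||_1 / 2
```

On this input the sketch recovers exactly the \(\boldsymbol{\alpha}\) of the text's example, \(\hat\alpha_{12}=3\) and \(\hat\alpha_{23}=-2\) (zero-based `alpha[0, 1]` and `alpha[1, 2]`), and \(||\hat{\boldsymbol{\alpha}}||_1\) over \(j<k\) equals \(\frac12||\boldsymbol{\beta}||_1 = 5\), in line with Proposition 2.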
Therefore, it may lead to different \(\hat {\boldsymbol {\alpha }}\)'s. Importantly, this algorithm always yields a minimum L1-norm estimate of α (see Proposition 5 later in this section). The following two propositions characterize properties of such an α with respect to its representations.

Proposition 2 (Minimum L1-norm of α) Given α and β satisfying Eq. (2), the following conditions are equivalent:

(A) For all α′ satisfying Eq. (2), ||α||1 ≤ ||α′||1.

(B) For all 1≤i,j,k≤p with i≠j and j≠k, αij αjk ≤ 0.

(C) \(||\boldsymbol {\alpha }||_{1}=\frac 12||\boldsymbol {\beta }||_{1}\).

Proposition 3 Given α and β satisfying Eq. (2), the following conditions are equivalent:

(D) For all α′≠α satisfying Eq. (2), ||α||1 < ||α′||1.

(E) The conditions in Proposition 2 are met by α, and there do not exist distinct (j1,k1,j2,k2) such that \(\alpha _{j_{1}k_{1}}\neq 0\) and \(\alpha _{j_{2}k_{2}}\neq 0\) simultaneously.

(F) There exists j such that \(|\beta _{j}|=\sum _{i\neq j}|\beta _{i}|\). Correspondingly, αij=βi for all i≠j, and αik=0 for all i≠j, k≠j.

The following proposition establishes the relations between the numbers of nonzero elements of α and β under different settings.

Proposition 4 Assume that α and β satisfy Eq. (2). Let \(A=||\boldsymbol {\alpha }||_{0}=\sum _{1 \leq j < k \leq p} I(|\alpha _{jk}| \neq 0)\) and \(B=||\boldsymbol {\beta }||_{0}=\sum _{j=1}^{p} I(|\beta _{j}| \neq 0)\) denote the numbers of nonzero elements of α and β, respectively. Then:

(G) B ≤ 2A.

(H) If α and β satisfy the conditions in Proposition 2, then \(2\sqrt {A}\leq B\leq 2A\).

(I) If α and β satisfy the conditions in Proposition 3 and α≠0 and β≠0, then B = A+1.

In view of condition (H) in Proposition 4, if the conditions of Proposition 2 are met with B=2 or 3, then those of Proposition 3 must be satisfied.

Proposition 5 For any \(\hat {\boldsymbol {\beta }}\) satisfying the sum-to-zero constraint, the corresponding \(\hat {\boldsymbol {\alpha }}\) produced by Algorithm 1 satisfies the conditions in Proposition 2.

The proofs of the propositions are supplied in the Appendix.

Constrained penalized likelihood

Given model Eq.
(3), a random sample of n observations \((\tilde {\boldsymbol {x}}_{i}, Y_{i})_{i=1}^{n}\) is obtained, based on which the log-likelihood function l(β) can be written as \(l(\boldsymbol {\beta })=\frac {-1}{2 \sigma ^{2}} \sum _{i=1}^{n} \left (Y_{i}-{\boldsymbol {\beta }}^{T} \tilde {\boldsymbol {x}}_{i}\right)^{2}\). In a high-dimensional situation, model Eq. (3) is overparameterized when p>n, and hence l(β) has multiple maximizers. To address this, we introduce a constrained penalized likelihood that generalizes Lasso regression with L1-regularization, leading to the following problem, referred to as Eq. (4): $$\begin{array}{@{}rcl@{}} \text{min} -l(\boldsymbol{\beta}) + \lambda \sum_{j=1}^{p} |\beta_{j}|, \quad \text{subject to} \sum_{j=1}^{p} \beta_{j}=0. \end{array} $$ Minimization of Eq. (4) in β yields the estimator \(\hat {\boldsymbol {\beta }}\). Since σ² can be absorbed into the regularization coefficient λ, we set σ=1 in the objective function for simplicity. In contrast to the Lasso problem, Eq. (4) has one additional linear constraint. The coordinate descent algorithm has been shown to be very efficient for solving L1-penalized problems [21], since the nondifferentiable L1 penalty is separable with respect to the βj's. However, the sum-to-zero constraint destroys this separability, so the coordinate descent algorithm can no longer guarantee convergence. In [17], the authors proposed adding diagonal moves and random local search to the coordinate descent algorithm, which improves the chance of convergence. To handle this convex optimization problem subject to linear constraints, we instead develop an algorithm based on the alternating direction method of multipliers (ADMM) [18, 19] that solves Eq. (4) iteratively; see the Appendix for details. In each iteration, we derive an analytic updating formula to expedite convergence, and convergence of the ADMM iterations is guaranteed by a result in Section 3.1 of [18].
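The ADMM updates for Eq. (4), derived in the Appendix, can be sketched for the plain sum-to-zero case (C = 1ᵀ, d = 0, no intercept). The following Python sketch is an illustration under these assumptions, not the authors' implementation; it uses a dense matrix inverse rather than a cached factorization, and `zero_sum_lasso_admm` and `soft_threshold` are names introduced here:

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding operator S_t(v)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def zero_sum_lasso_admm(A, b, lam, rho=1.0, n_iter=2000, tol=1e-10):
    """ADMM sketch for min 0.5*||A x - b||^2 + lam*||z||_1
    subject to sum(x) = 0 and x = z (i.e., C = 1^T, d = 0)."""
    n, p = A.shape
    ones = np.ones(p)
    # x-update matrix: A^T A + rho * C^T C + rho * I
    M_inv = np.linalg.inv(A.T @ A + rho * np.outer(ones, ones) + rho * np.eye(p))
    Atb = A.T @ b
    x = np.zeros(p); z = np.zeros(p); u1 = 0.0; u2 = np.zeros(p)
    for _ in range(n_iter):
        x = M_inv @ (Atb + rho * (z - ones * u1 - u2))
        z_old, z = z, soft_threshold(x + u2, lam / rho)
        u1 += ones @ x            # scaled dual for the sum-to-zero constraint
        u2 += x - z               # scaled dual for x = z
        if (np.linalg.norm(x - z) < tol and abs(ones @ x) < tol
                and np.linalg.norm(z - z_old) < tol):
            break
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 8))
beta_true = np.array([1.0, -1.0, 0.5, -0.5, 0, 0, 0, 0])
b = A @ beta_true
beta_hat = zero_sum_lasso_admm(A, b, lam=0.1)
# beta_hat sums to approximately zero: the constraint is enforced
```

With warm starts over a decreasing λ sequence, the same routine can trace a solution path; caching a factorization of the x-update matrix, as described in the Appendix, makes each iteration cheap.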
We compare our ADMM algorithm with the algorithm proposed in [17] in the "Results" section.

Comparison of ADMM and CD+LS algorithms

This section compares our ADMM algorithm with the coordinate descent with local search (CD+LS) algorithm of [17] for Eq. (4) with respect to computational efficiency, using one simulated example. The CD+LS algorithm is implemented in the R package zeroSum (version 1.0.4, https://github.com/rehbergT/zeroSum). In this example, we consider correlated predictors; that is, the \(\tilde {\boldsymbol {x}}_{i}\)'s are drawn iid from N(0,V) and are independent of the εi's, which are sampled from N(0,1), where V is a p×p matrix whose ijth element is \(0.5^{|i-j|}\). Moreover, the true βj's are drawn iid from N(0,1) and then centered to have sum zero, and λ is fixed at 1. The rates of successful convergence and the running times of the ADMM and CD+LS algorithms are recorded over 20 simulations. In particular, a tolerance of 10⁻¹⁰ is used for both algorithms. Successful convergence is reported if a solution from an algorithm satisfies the sum-to-zero constraint within a tolerance of 10⁻⁸, and both the solution and its objective value are no further than 10⁻⁸ from the optimal solution and its corresponding objective value in terms of the L2-distance. Here, the optimal solution is defined as the one, among the two solutions from the two algorithms, having the minimal objective value and satisfying the sum-to-zero constraint. Four different settings are compared, ranging from low- to high-dimensional situations. As shown in Table 1, the proposed ADMM algorithm outperforms the CD+LS algorithm with respect to both convergence guarantee and running time.

Table 1 Comparison of ADMM and CD+LS algorithms

Comparison of the proposed method and the Lasso

This section examines the effectiveness of the proposed method through simulated examples.
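The simulated design used in these experiments, with rows drawn iid from N(0,V) and V having ijth element 0.5^|i−j|, can be generated via a Cholesky factor of V. This is a sketch with names of our choosing, not the authors' simulation code:

```python
import numpy as np

def simulate_design(n, p, rho=0.5, sigma=1.0, seed=0):
    """Draw n rows iid from N(0, V) with AR(1)-type correlation
    V_jk = rho^|j-k|, plus an independent N(0, sigma^2) noise vector."""
    rng = np.random.default_rng(seed)
    idx = np.arange(p)
    V = rho ** np.abs(idx[:, None] - idx[None, :])
    L = np.linalg.cholesky(V)               # V = L L^T
    X = rng.standard_normal((n, p)) @ L.T   # rows are iid N(0, V)
    eps = sigma * rng.standard_normal(n)
    return X, eps, V

X, eps, V = simulate_design(n=100, p=25)
```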
Specifically, the proposed method is compared with the Lasso in terms of predictive accuracy and identification of the true model, where the Lasso is implemented in the R package glmnet [21]. Simulated examples are generated with correlated predictors, mimicking the structure of the data to be analyzed. These simulations are designed to examine various operating characteristics of the proposed method with respect to (p,n), the noise level σ², and the correlation structures among predictors in Eqs. (1) and (3). For tuning, λ is searched over 100 grid points uniformly spaced (on the log-scale) between 10⁴ and 10⁻². An independent testing dataset with 1000 randomly generated data points is used to find the optimal λ, which minimizes the mean squared error (MSE). For performance metrics, an independent validation dataset with 1000 randomly generated data points is used to evaluate the performance of the fitted model in terms of MSE and R². To assess robustness of the approaches under data normalization, we randomly add sample-wise shifts from N(0,1) to the validation dataset. Furthermore, we consider two other metrics for parameter estimation and for the quality of identification of the zero elements of the true α in Eq. (1) and β in Eq. (3). For parameter estimation, we use the relative error (RE) for estimating the true regression coefficients \(\boldsymbol {\gamma }^{0}=\left (\gamma ^{0}_{1},\ldots,\gamma ^{0}_{p}\right)^{T}\), defined as $$\begin{array}{@{}rcl@{}} \text{RE} =\frac{||\tilde{\boldsymbol{\gamma}} - {\boldsymbol{\gamma}}^{0}||_{2}} {||\boldsymbol{\gamma}^{0}||_{2}}, \end{array} $$ where \(\tilde {\boldsymbol {\gamma }}=(\tilde {\gamma }_{1},\ldots,\tilde {\gamma }_{p})^{T}\) are the estimated regression coefficients. This metric accounts for the different scales of α and β.
For accuracy of identification, we use the false identification rate (FR), defined as $$\begin{array}{@{}rcl@{}} \text{FR} = 1-\frac{|\tilde\Gamma \cap \Gamma^{0}|}{|\Gamma^{0}|}, \end{array} $$ where \(\Gamma ^{0}=\{j | \gamma ^{0}_{j}\neq 0\}\) and \(\tilde \Gamma =\{j | \tilde \gamma ^{\prime }_{j}\neq 0\}\), with \(\tilde \gamma ^{\prime }\) being a truncated version of \(\tilde \gamma \) in which only the coefficients with the |Γ⁰| largest absolute values are retained and all others are zeroed out. Our simulated example concerns correlation structures among predictors. In Eqs. (1) and (3), the log xi's are iid from N(0,V) and are independent of the εi's, which are iid from N(0,σ²); V is a p×p matrix whose ijth element is \(0.5^{|i-j|}\), and α=(α12,α13,…,αp−1,p)ᵀ. Three settings for α are considered:

1. α12=1, α13=0.5, α24=0.5, and αjk=0 otherwise, which does not satisfy the conditions of Proposition 2; here β1=1.5, β2=β3=β4=−0.5, and βj=0 for j≥5.

2. α12=1, α13=0.5, α24=−0.5, and αjk=0 otherwise, which satisfies the conditions of Proposition 2 but not those of Proposition 3; here β1=1.5, β2=−1.5, β3=−0.5, β4=0.5, and βj=0 for j≥5.

3. α12=1, α13=0.5, α14=0.5, and αjk=0 otherwise, which satisfies the conditions of Proposition 3; here β1=2, β2=−1, β3=β4=−0.5, and βj=0 for j≥5.

The proposed method is compared with the Lasso in two models, corresponding to the gene-pair-level design matrix z in Eq. (1) and to \(\tilde {\boldsymbol {x}}\) in Eq. (3) (without the sum-to-zero constraint), referred to as Lasso 1 and Lasso 2, respectively. These three methods are examined in an easy and a difficult situation, with (p,n,σ)=(25,50,0.5) and (100,25,0.2), respectively. The values of MSE, R², RE, and FR are then reported. As suggested by Table 2, the proposed method performs better than or comparably to Lasso 1 and Lasso 2 in terms of accuracy and robustness across all six situations.
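The RE and FR metrics defined above can be implemented directly from their definitions. This is a sketch with illustrative names; the truncation keeps the |Γ⁰| largest coefficients in absolute value:

```python
import numpy as np

def relative_error(gamma_hat, gamma0):
    """RE = ||gamma_hat - gamma0||_2 / ||gamma0||_2."""
    return np.linalg.norm(gamma_hat - gamma0) / np.linalg.norm(gamma0)

def false_identification_rate(gamma_hat, gamma0):
    """FR = 1 - |supp(truncated gamma_hat) ∩ supp(gamma0)| / |supp(gamma0)|,
    keeping only the |supp(gamma0)| largest |gamma_hat| entries."""
    true_supp = set(np.flatnonzero(gamma0))
    k = len(true_supp)
    kept = set(np.argsort(-np.abs(gamma_hat))[:k])
    return 1.0 - len(kept & true_supp) / k

gamma0 = np.array([1.5, -0.5, -0.5, -0.5, 0.0, 0.0])
gamma_hat = np.array([1.4, -0.4, -0.6, 0.05, -0.45, 0.0])
print(false_identification_rate(gamma_hat, gamma0))  # 0.25
```

In the toy example, one of the four true support indices is displaced by a spurious coefficient, giving FR = 1 − 3/4 = 0.25.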
The improved performance is attributed to model Eq. (3) having far fewer candidate parameters than model Eq. (1), as well as to the sum-to-zero constraint in Eq. (3). Interestingly, the false identification rates of the proposed method are almost zero in the three low-noise settings with (p,n,σ)=(25,50,0.5), regardless of whether the conditions in Propositions 2 and 3 are met, and are small in the other three settings. In contrast, Lasso 1 has a higher relative error and false identification rate. While Lasso 2 has a relative error and false identification rate similar to those of the proposed method, it has higher MSE and lower R² in all settings, owing to its non-robustness to sample-wise scaling in the absence of the sum-to-zero constraint. Overall, all three methods perform better in the low-noise situation (p,n,σ)=(25,50,0.5) than in the high-noise situation (p,n,σ)=(100,25,0.2). Across the three settings of α, the performance of the proposed method is rather stable. However, Lasso 1 performs much worse in settings where α fails to satisfy the conditions in Proposition 3, corresponding to non-uniqueness of the representation of α. Importantly, even when α satisfies the conditions in Proposition 3, the proposed method continues to outperform its counterparts on these performance metrics. Overall, the proposed method achieves our objective.

Table 2 Comparison of the proposed method and the Lasso in simulations

An application to a real RNA-Seq dataset

This section applies the proposed method to a prostate adenocarcinoma (PRAD) RNA-Seq dataset published as part of The Cancer Genome Atlas (TCGA) project [22]. In particular, we identify gene pairs that are associated with pre-operative prostate-specific antigen (PSA), an important risk factor for prostate cancer. To this end, we download normalized gene expression data from the TCGA data portal (https://tcga-data.nci.nih.gov/docs/publications/tcga/).
As described by TCGA, tissue samples from 333 PRAD patients were sequenced on Illumina sequencing instruments. Raw sequencing reads were processed and analyzed using the SeqWare Pipeline 0.7.0 and the MapspliceRSEM workflow 0.7 developed by the University of North Carolina: read alignment to the human reference genome was performed using MapSplice [23], and gene expression levels were estimated using RSEM [24] with gene annotation GAF 2.1 and further normalized so that the upper-quartile count is 1,000 in each sample. All these steps were performed by the TCGA consortium. In our experiment, the normalized RSEM gene expression estimates are used, excluding samples with missing pre-operative PSA values and genes whose average normalized expression level is lower than 10. This preprocessing step yields p=15,382 genes and n=187 samples. Furthermore, we run Pearson correlation tests between the log-transformed expression levels of each gene and the log-transformed pre-operative PSA levels, and exclude genes with false discovery rate (FDR) values (calculated using the Benjamini-Hochberg method [25] from the p-values of the Pearson correlation tests) larger than 0.01. Consequently, only 520 genes are retained in the analysis, on which we fit model Eq. (3) using the proposed ADMM algorithm. To visualize the selection result, we display the solution paths of the model fitting. As shown in Fig. 1, the first pair of genes entering the model is PTPRR and KRT15. PTPRR is a member of the protein tyrosine phosphatase (PTP) family, which is known to be related to prostate cancer [26, 27], while KRT15 is a member of the keratin gene family, which is known to be associated with breast cancer [28] and lung cancer [29]. Interestingly, we find no publication record on PubMed for keywords such as "KRT15 AND PSA" or "KRT15 AND prostate".
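The Benjamini-Hochberg screening step can be sketched as follows (an illustration with toy p-values, not the TCGA pipeline code; `bh_adjust` is a name introduced here):

```python
import numpy as np

def bh_adjust(pvals):
    """Benjamini-Hochberg FDR-adjusted p-values."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    scaled = p[order] * m / np.arange(1, m + 1)          # p_(i) * m / i
    adj = np.minimum(np.minimum.accumulate(scaled[::-1])[::-1], 1.0)
    out = np.empty(m)
    out[order] = adj
    return out

# Keep genes whose correlation-test p-value survives FDR <= 0.01
# (toy p-values; in the paper these come from per-gene Pearson tests).
pvals = np.array([1e-6, 0.004, 0.02, 0.5, 0.8])
keep = bh_adjust(pvals) <= 0.01
print(keep)   # [ True  True False False False]
```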
By correlating log expression levels and log PSA levels in the 187 patients, we find that both PTPRR and KRT15 are significantly correlated with PSA levels (r=0.28 and p<10⁻⁴ for PTPRR; r=−0.33 and p<10⁻⁵ for KRT15). Not surprisingly, their log-ratio is even more strongly correlated with log PSA levels (r=0.41 and p<10⁻⁸), demonstrating the potential of using gene pairs as biomarkers.

Fig. 1 Solution paths of the model fitting with p=520 genes

The other selected genes are HIST1H1E, LRAT, LCN2, KCNN4, RHOU, and EPHA5, in the order in which they enter the model. Among these, LRAT [30], LCN2 [31], RHOU [32], and EPHA5 [33] are known to be linked to prostate cancer, while HIST1H1E and KCNN4 are connected to myeloma [34] and pancreatic cancer [35], respectively. To demonstrate the scalability of the proposed method, we employ the ADMM algorithm to fit Eq. (3) with all p=15,382 genes, without pre-screening. In this situation, the first pair of genes entering the model is BCL8 and KRT15, where BCL8 is known to be associated with lymphoma [36]. The other selected genes are PTPRR, LRAT, and LCN2, very similar to the foregoing results based on pre-screening. By comparison, fitting the corresponding model Eq. (1) using a standard Lasso algorithm such as glmnet [21] would be practically prohibitive, as it requires storing a design matrix of \(\frac {p(p-1)}{2}\times n \approx 22\times 10^{9}\) elements. To further demonstrate the robustness of the proposed method with respect to data normalization, we randomly scale the gene expression levels along both the gene dimension and the sample dimension, mimicking gene-length normalization and library-size (sequencing-depth) normalization, respectively. Numerically, we find that the solution path fitted using the randomly scaled data is always identical to that fitted using the original data.

Model Eq. (3) has been proposed for compositional data [16] and recently for reference point insensitive data [17].
In this article, we explore Eq. (3) for the identification of gene pairs as biomarkers, which enjoys robustness to sample-wise scaling normalization (a common practice for RNA-Seq data) and simplicity of validation and measurement by qPCR techniques. Through Propositions 1–4, we establish the relationship between models Eq. (1) and Eq. (3). Additionally, we develop an efficient ADMM algorithm for solving model Eq. (3), which is guaranteed to converge and is shown to be highly competitive in terms of computational efficiency. One interesting yet important issue of model Eq. (3) is the determination of the value of α. One proposal is to choose the α that minimizes the L0-norm instead of the L1-norm. However, in that situation it remains unclear what the analogues of the conditions in Propositions 2, 3, and 4 would be. Furthermore, minimization of the L0-norm in α is itself challenging due to the non-convexity and discontinuity of the L0-function. Our approach based on the L1-norm therefore gives rise to a convex minimization problem, which is easier to handle. One important aspect of model Eq. (3) is that it identifies gene pairs in an unbiased manner, without any prior knowledge of the known biology of the disease. However, in some situations information about the disease of interest is available from previous studies. The prior knowledge then needs to be integrated into gene pair identification so that more probable subsets of genes or pathways have a higher chance of being selected. This can be accomplished through weighted regularization with weights \(\{\lambda _{k}\}_{k=1}^{p}\), with a large weight corresponding to a small chance of being selected. Moreover, in some other situations gene pairs are constrained to be formed only between relevant genes from the same pathway.
This can be achieved by replacing the Lasso penalty with a (sparse) group Lasso penalty [37] and/or the single equality constraint with a set of constraints, each corresponding to a given pathway of interest. Finally, non-convex penalties such as the SCAD penalty [14] and the TLP penalty [15] can be used in place of the Lasso penalty to achieve a higher accuracy of selection at the expense of computation. For a large-scale problem, an ADMM algorithm may have only a linear convergence rate. To expedite convergence, an inexact ADMM algorithm may be useful in our setting, which has recently been shown to yield a substantial improvement over the standard ADMM algorithm [19]. Furthermore, parallelization of our ADMM algorithm may achieve further scalability, which is one advantage of ADMM algorithms over many other optimization techniques [18]. One extension of Eq. (1) is to generalized linear models, such as logistic regression, or to other predictive models, such as the support vector machine. In such situations, the proposed method for Eq. (1) is directly applicable to gene pair identification with some modifications; further investigation is necessary. In conclusion, the experimental results demonstrate that gene pairs can be used as robust biomarkers that tolerate sample-wise scaling normalization. Furthermore, using L1-penalized regression with an equality constraint, the model fitting can be formulated as a convex optimization problem that can be solved efficiently using the proposed ADMM algorithm. This approach has the potential to discover novel and reliable biomarkers for biological or clinical studies.

Proofs of propositions

Proof of Proposition 1: From Eq.
(1), we have $$\begin{array}{@{}rcl@{}} f(\boldsymbol{z})&=&\sum_{j=1}^{p}\sum_{k=j+1}^{p}\alpha_{jk}(\log x_{j} - \log x_{k})\\ &=&\sum_{j=1}^{p}\log x_{j}\left(\sum_{k=j+1}^{p}\alpha_{jk}-\sum_{k=1}^{j-1}\alpha_{kj}\right)\\ &=&\sum_{j=1}^{p}\log x_{j}\sum_{k\neq j}\alpha_{jk}\\ &=&\sum_{j=1}^{p}\beta_{j}\log x_{j}, \end{array} $$ using the convention αkj=−αjk. Furthermore, $$\begin{array}{@{}rcl@{}} \sum_{j=1}^{p}\beta_{j} &=& \sum_{j=1}^{p}\sum_{k\neq j}\alpha_{jk}\\ &=&\sum_{j=1}^{p} \sum_{k= j+1}^{p} \alpha_{jk}-\sum_{j=1}^{p} \sum_{k=1}^{j-1} \alpha_{kj}\\ &=&\sum_{j=1}^{p} \sum_{k= j+1}^{p} \alpha_{jk}-\sum_{k=1}^{p} \sum_{j=k+1}^{p} \alpha_{kj}\\ &=&0. \end{array} $$ This completes the proof.

Proof of Proposition 2: We show (A) ⇒(B), (B) ⇒(C), and (C) ⇒(A), respectively.

(A) ⇒(B): We prove by contradiction. Without loss of generality, assume that α12≥α23>0. Then, we construct α′ such that \(\alpha ^{\prime }_{12}=\alpha _{12}-\alpha _{23}\), \(\alpha ^{\prime }_{23}=0\), \(\alpha ^{\prime }_{13}=\alpha _{13}+\alpha _{23}\), and \(\alpha ^{\prime }_{ij}=\alpha _{ij}\) otherwise. One can verify that α′ satisfies Eq. (2) and ||α′||1−||α||1≤−α23<0, which contradicts (A).

(B) ⇒(C): By (B), αij and αjk always have opposite signs. Together with the convention that αji=−αij, this implies that αji and αjk always have the same sign, where 0 can be regarded as having an arbitrary sign. Therefore, from Eq. (2), we have $$\begin{array}{@{}rcl@{}} \frac12||\boldsymbol{\beta}||_{1}=\frac12\sum_{j=1}^{p}|\beta_{j}|=\frac12\sum_{j=1}^{p}\left|\sum_{k\neq j}\alpha_{jk}\right|\\ =\frac12\sum_{j=1}^{p}\sum_{k\neq j}|\alpha_{jk}|=||\boldsymbol{\alpha}||_{1}. \end{array} $$

(C) ⇒(A): For any α′ satisfying Eq. (2), we have $$\begin{array}{@{}rcl@{}} ||\boldsymbol{\alpha}||_{1}=\frac12||\boldsymbol{\beta}||_{1}=\frac12\sum_{j=1}^{p}|\beta_{j}|=\frac12\sum_{j=1}^{p}\left|\sum_{k\neq j}\alpha^{\prime}_{jk}\right|\\ \leq\frac12\sum_{j=1}^{p}\sum_{k\neq j}|\alpha^{\prime}_{jk}|=||\boldsymbol{\alpha}^{\prime}||_{1}.
\end{array} $$

Proof of Proposition 3: We show (D) ⇒(E), (E) ⇒(F), and (F) ⇒(D), respectively.

(D) ⇒(E): As in the proof of Proposition 2, we prove by contradiction and assume, without loss of generality, that α24≥α13>0. Then, we construct α′ such that \(\alpha ^{\prime }_{13}=0\), \(\alpha ^{\prime }_{14}=\alpha _{14}+\alpha _{13}\), \(\alpha ^{\prime }_{23}=\alpha _{23}+\alpha _{13}\), \(\alpha ^{\prime }_{24}=\alpha _{24}-\alpha _{13}\), and \(\alpha ^{\prime }_{ij}=\alpha _{ij}\) otherwise. One can verify that α′≠α also satisfies Eq. (2) and ||α′||1≤||α||1, which contradicts (D).

(E) ⇒(F): From (E), there exists j such that αik=0 for all i≠j and k≠j. From Eq. (2) and condition (B) in Proposition 2, \(\beta _{i}=\sum _{k\neq i}\alpha _{ik}=\alpha _{ij}\) for all \(i\neq j\), and \( |\beta _{j}|=\left |\sum _{k\neq j}\alpha _{jk}\right |=\sum _{k\neq j}|\alpha _{jk}|=\sum _{k\neq j}|\beta _{k}|. \)

(F) ⇒(D): Suppose that α′ satisfies Eq. (2) and the conditions in Proposition 2. From (B), we know that for all k≠j, the \(\alpha ^{\prime }_{jk}\) have the same sign. From Eq. (2), (C), and (F), we have $${} \frac12||\boldsymbol{\beta}||_{1}=|\beta_{j}|=\left|\sum_{k\neq j}\alpha^{\prime}_{jk}\right|=\sum_{k\neq j}|\alpha^{\prime}_{jk}|\leq||\boldsymbol{\alpha}^{\prime}||_{1}=\frac12||\boldsymbol{\beta}||_{1}.$$ Therefore, we must have \(\sum _{k\neq j}|\alpha ^{\prime }_{jk}|=||\boldsymbol {\alpha }^{\prime }||_{1}\); that is, \(\alpha ^{\prime }_{ik}=0\) for all i≠j, k≠j. Furthermore, for any i≠j, we have \(\beta _{i}=\sum _{k\neq i}\alpha ^{\prime }_{ik}=\alpha ^{\prime }_{ij}\). Therefore, α′=α, implying the uniqueness of α. This completes the proof.

Proof of Proposition 4: (G): By Eq. (2), βj≠0 only if αjk≠0 for at least one k≠j. Based on α and β, we can construct an undirected graph G=(V,E) with p vertices such that there is an edge between vertices i and j if and only if αij≠0. Then βj cannot be nonzero unless vertex Vj has degree at least 1.
Since the total number of edges is A, we have \(B\leq \sum _{j=1}^{p}I(degree(V_{j})>0)\leq \sum _{j=1}^{p} degree(V_{j})=2A\).

(H): Suppose α satisfies the conditions defined in Proposition 2. If αij≠0, let αij be the weight associated with the edge connecting Vi and Vj. By condition (B), for any cycle in the graph formed by a sequence i1,i2,…,ik,ik+1=i1, the weights associated with adjacent edges (i.e., \(\alpha _{i_{j-1}i_{j}}\) and \(\alpha _{i_{j}i_{j+1}}\)) always have opposite signs. Therefore, the number of edges in any cycle has to be even, which means that the graph has to be bipartite. It is then easy to see that \(A\leq \left (\frac {B}{2}\right)^{2}\) for such graphs; that is, \(2\sqrt {A}\leq B\).

(I): Suppose α≠0 satisfies the conditions defined in Proposition 3. By condition (F), B=A+1. This completes the proof.

Proof of Proposition 5: It suffices to show that \(\hat {\boldsymbol{\alpha }}\) satisfies (C) of Proposition 2. Note that \(\tilde {\boldsymbol {\beta }}\) in Algorithm 1 satisfies the sum-to-zero constraint at each iteration before termination. In the beginning, \(\|\hat {\boldsymbol {\alpha }}\|_{1}=0\) and \(\|\tilde {\boldsymbol {\beta }}\|_{1}=\|\hat {\boldsymbol {\beta }}\|_{1}\). After each iteration, \(\|\hat {\boldsymbol {\alpha }}\|_{1}\) is increased by \(\hat \alpha _{k_{1} k_{2}}=\min (|\tilde {\beta }_{k_{1}}|, |\tilde {\beta }_{k_{2}}|)\), and \(\|\tilde {\boldsymbol {\beta }}\|_{1}\) is decreased by \(2\hat \alpha _{k_{1} k_{2}}\). In the end, \(\|\tilde {\boldsymbol {\beta }}\|_{1}=0\), and therefore \(\|\hat {\boldsymbol {\alpha }}\|_{1} =\frac {1}{2} \|\hat {\boldsymbol {\beta }}\|_{1}\). This completes the proof.

ADMM algorithm for solving Eq. (4)

We adopt the notation in [18] and reformulate Eq.
(4) as $$ \begin{aligned} &\text{min}\ (1/2)\|\boldsymbol{Ax}-\boldsymbol{b}\|^{2}_{2}+\lambda\|\boldsymbol{z}\|_{1} \\ &\text{subject to}\ \boldsymbol{Cx}=d, \boldsymbol{x}-\boldsymbol{z}=\boldsymbol{0}, \end{aligned} $$ where x are the parameters of interest. If \(\boldsymbol{C}=\boldsymbol{1}^{T}\) and d=0, all coefficients sum to zero. When there is an intercept in the model, we can prepend a scalar 0 as the first element of C, meaning that no constraint is placed on the intercept; by convention, we also do not penalize the intercept. Denoting \(\boldsymbol {B}=\left [\begin {array}{ll}\boldsymbol {C}\\\boldsymbol {I} \end {array}\right ]\), \(\boldsymbol {D}=\left [\begin {array}{cc}\boldsymbol {0}\\\boldsymbol {-I}\end {array}\right ]\), and \(\boldsymbol {d}=\left [\begin {array}{ll}d\\\boldsymbol {0}\end {array}\right ]\), the two equality constraints can be combined as B x+D z=d. To apply the ADMM algorithm [18], we form the augmented Lagrangian $$\begin{array}{@{}rcl@{}} L_{\rho}(\boldsymbol{x},\boldsymbol{z},\boldsymbol{y})& = & (1/2)\|\boldsymbol{Ax}-\boldsymbol{b}\|^{2}_{2}+\lambda\|\boldsymbol{z}\|_{1} \\ & & +(\rho/2)\|\boldsymbol{B}\boldsymbol{x}+\boldsymbol{D}\boldsymbol{z}-\boldsymbol{d}+\boldsymbol{u}\|^{2}_{2}-(\rho/2)\|\boldsymbol{u}\|^{2}_{2}, \end{array} $$ with the scaled dual variable \(\boldsymbol {u}^{k}=(1/\rho)\boldsymbol {y}^{k}=\left [\begin {array}{ll}u_{1}\\ \boldsymbol {u_{2}}\end {array}\right ]\), where u1 is a scalar and \(\boldsymbol {u_{2}} \in \mathbb {R}^{p\times 1}\).
Let \(\boldsymbol {E}=\left [\begin {array}{cc}\boldsymbol {A}\\ \sqrt {\rho } \boldsymbol {C}\end {array}\right ]\); the ADMM algorithm then consists of the following iterations: $${} {{\begin{aligned} \boldsymbol{x}^{k+1}&:=(\boldsymbol{E}^{T}\boldsymbol{E}+\rho \boldsymbol{I})^{-1}\left(\boldsymbol{A}^{T}\boldsymbol{b}+\rho\left(\boldsymbol{z}^{k}-\boldsymbol{C}^{T}u_{1}^{k}+\boldsymbol{C}^{T}d-\boldsymbol{u_{2}}^{k}\right)\right)\\ \boldsymbol{z}^{k+1}&:=S_{\lambda/\rho}\left(\boldsymbol{x}^{k+1}+\boldsymbol{u_{2}}^{k}\right)\\ \boldsymbol{u}^{k+1}&:=\boldsymbol{u}^{k}+\left(\boldsymbol{B}\boldsymbol{x}^{k+1}+\boldsymbol{D}\boldsymbol{z}^{k+1}-\boldsymbol{d}\right). \end{aligned}}} $$ The x-update can be accelerated by caching an initial factorization. Suppose the dimension of E is m×p. If m<p, we cache the factorization of \(\boldsymbol{I}+(1/\rho) \boldsymbol{E}\boldsymbol{E}^{T}\) (of dimension m×m) and use the matrix inversion lemma $$\left(\rho \boldsymbol{I}+\boldsymbol{E}^{T}\boldsymbol{E}\right)^{-1}=\boldsymbol{I}/\rho-\boldsymbol{E}^{T}\left(\boldsymbol{I}+1/\rho \boldsymbol{E}\boldsymbol{E}^{T}\right)^{-1}\boldsymbol{E}/\rho^{2}$$ to update x. Otherwise, we cache the factorization of \(\rho \boldsymbol{I}+\boldsymbol{E}^{T}\boldsymbol{E}\) (of dimension p×p) and use back- and forward-substitution to update x directly. The iteration stops when the primal and dual residuals are smaller than their corresponding tolerances, $$\|\boldsymbol{r}^{k+1}\|_{2}\leq \epsilon^{pri}\quad \text{and}\ \quad \|\boldsymbol{s}^{k+1}\|_{2} \leq \epsilon^{dual},$$ where $${} {{\begin{aligned} \boldsymbol{r}^{k+1}&=\boldsymbol{B}\boldsymbol{x}^{k+1}+\boldsymbol{D}\boldsymbol{z}^{k+1}-\boldsymbol{d},\\ \boldsymbol{s}^{k+1}&=\rho\boldsymbol{B}^{T}\boldsymbol{D} \left(\boldsymbol{z}^{k+1}-\boldsymbol{z}^{k}\right),\\ \epsilon^{pri}&=\sqrt{p+1} \epsilon^{abs}+\epsilon^{rel} \text{max}\ \left\{ \| \boldsymbol{B} \boldsymbol{x}^{k+1}\|_{2},\|-\boldsymbol{z}^{k+1}\|_{2}, \|\boldsymbol{d}\|_{2} \right\},\\ \epsilon^{dual}&=\sqrt{p} \epsilon^{abs}+\epsilon^{rel}\|\rho \boldsymbol{B}^{T}\boldsymbol{u}^{k+1}\|_{2}.
\end{aligned}}} $$ Usually, the relative stopping criterion is chosen as ε^rel=10⁻⁴, and the choice of the absolute stopping criterion ε^abs depends on the scale of the variable values; see Boyd et al. [18] for details. To compute a solution path for a decreasing sequence of λ values, we adopt the approach of Friedman et al. [21] and use warm starts for each λ value. The sequence of λ values is either provided by the user, or we begin with \(\lambda_{max} =\|\boldsymbol{A}^{T}\boldsymbol{b}\|_{\infty}\), for which all coefficients are equal to 0. We set \(\lambda_{min} =\epsilon_{\lambda} \lambda_{max}\), where ε_λ is a small value such as 0.01, and generate a decreasing sequence of 100 λ values from λmax to λmin on the log-scale.

Abbreviations

ADMM: Alternating direction method of multipliers; CD+LS: Coordinate descent with local search; DNA: Deoxyribonucleic acid; FDR: False discovery rate; FR: False identification rate; iid: Independent and identically distributed; Lasso: Least absolute shrinkage and selection operator; mRNA: Messenger ribonucleic acid; MSE: Mean squared error; PRAD: Prostate adenocarcinoma; PSA: Prostate-specific antigen; PTP: Protein tyrosine phosphatase; qPCR: Quantitative polymerase chain reaction; RE: Relative error; RNA: Ribonucleic acid; RNA-Seq: RNA sequencing; RSEM: RNA-Seq by expectation-maximization; SCAD: Smoothly clipped absolute deviation; TCGA: The Cancer Genome Atlas; TLP: Truncated L1 penalty; TSP: Top-scoring pair

References

Schena M, Shalon D, Davis RW, Brown PO. Quantitative monitoring of gene expression patterns with a complementary DNA microarray. Science. 1995; 270(5235):467.
Quackenbush J. Microarray analysis and tumor classification. N Engl J Med. 2006; 354(23):2463–72.
Dillies MA, Rau A, Aubert J, Hennequet-Antier C, Jeanmougin M, Servant N, Keime C, Marot G, Castel D, Estelle J, et al. A comprehensive evaluation of normalization methods for Illumina high-throughput RNA sequencing data analysis. Brief Bioinform. 2013; 14(6):671–83.
Patil P, Bachant-Winner PO, Haibe-Kains B, Leek JT. Test set bias affects reproducibility of gene signatures. Bioinformatics. 2015; 31(14):2318–23.
Geman D, d'Avignon C, Naiman DQ, Winslow RL. Classifying gene expression profiles from pairwise mRNA comparisons. Stat Appl Genet Mol Biol. 2004; 3(1):1–19.
Leek JT. The tspair package for finding top scoring pair classifiers in R. Bioinformatics. 2009; 25(9):1203–04.
Livak KJ, Schmittgen TD. Analysis of relative gene expression data using real-time quantitative PCR and the 2^−ΔΔCT method. Methods. 2001; 25(4):402–8.
Arukwe A. Toxicological housekeeping genes: do they really keep the house?. Environ Sci Technol. 2006; 40(24):7944–9.
Ma XJ, Wang Z, Ryan PD, Isakoff SJ, Barmettler A, Fuller A, Muir B, Mohapatra G, Salunga R, Tuggle JT, et al. A two-gene expression ratio predicts clinical outcome in breast cancer patients treated with tamoxifen. Cancer Cell. 2004; 5(6):607–16.
Raponi M, Lancet JE, Fan H, Dossey L, Lee G, Gojo I, Feldman EJ, Gotlib J, Morris LE, Greenberg PL, et al. A 2-gene classifier for predicting response to the farnesyltransferase inhibitor tipifarnib in acute myeloid leukemia. Blood. 2008; 111(5):2589–96.
Price ND, Trent J, El-Naggar AK, Cogdell D, Taylor E, Hunt KK, Pollock RE, Hood L, Shmulevich I, Zhang W. Highly accurate two-gene classifier for differentiating gastrointestinal stromal tumors and leiomyosarcomas. Proc Natl Acad Sci. 2007; 104(9):3414–19.
Tan AC, Naiman DQ, Xu L, Winslow RL, Geman D. Simple decision rules for classifying human cancers from gene expression profiles. Bioinformatics. 2005; 21(20):3896–904.
Tibshirani R. Regression shrinkage and selection via the lasso. J R Stat Soc Ser B Methodol. 1996; 58(1):267–88.
Fan J, Li R. Variable selection via nonconcave penalized likelihood and its oracle properties. J Am Stat Assoc. 2001; 96(456):1348–60.
Shen X, Pan W, Zhu Y. Likelihood-based selection and sharp parameter estimation. J Am Stat Assoc. 2012; 107(497):223–32.
Lin W, Shi P, Feng R, Li H, et al. Variable selection in regression with compositional covariates. Biometrika. 2014; 101(4):785–97.
Altenbuchinger M, Rehberg T, Zacharias H, Stämmler F, Dettmer K, Weber D, Hiergeist A, Gessner A, Holler E, Oefner PJ, et al. Reference point insensitive molecular data analysis. Bioinformatics. 2016; 598:1–122. Boyd S, Parikh N, Chu E, Peleato B, Eckstein J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found Trends Mach Learn. 2011; 3(1):1–122. Wang H, Banerjee A. Bregman alternating direction method of multipliers. In: Advances in Neural Information Processing Systems. Curran Associates, Inc.: 2014. p. 2816–24. Smyth G. Linear models and empirical bayes methods for assessing differential expression in microarray experiments. Stat Appl Genet Mol Biol. 2004; 3(1):1–25. Friedman J, Hastie T, Tibshirani R. Regularization paths for generalized linear models via coordinate descent. J Stat Softw. 2010; 33(1):1. The Cancer Genome Atlas Research Network: The molecular taxonomy of primary prostate cancer. Cell. 2015; 163(4):1011–25. Wang K, Singh D, Zeng Z, Coleman SJ, Huang Y, Savich GL, He X, Mieczkowski P, Grimm SA, Perou CM, et al. Mapsplice: accurate mapping of rna-seq reads for splice junction discovery. Nucleic Acids Res. 2010; 38(18):e178. Li B, Dewey CN. Rsem: accurate transcript quantification from rna-seq data with or without a reference genome. BMC Bioinformatics. 2011; 12(1):1. Benjamini Y, Hochberg Y. Controlling the false discovery rate: A practical and powerful approach to multiple testing. J R Stat Soc Ser B Methodol. 1995; 57(1):289–300. Rand KA, Rohland N, Tandon A, Stram A, Sheng X, Do R, Pasaniuc B, Allen A, Quinque D, Mallick S, et al. Whole-exome sequencing of over 4100 men of african ancestry and prostate cancer risk. Hum Mol Genet. 2016; 25(2):371–81. Munkley J, Lafferty NP, Kalna G, Robson CN, Leung HY, Rajan P, Elliott DJ. Androgen-regulation of the protein tyrosine phosphatase ptprr activates erk1/2 signalling in prostate cancer cells. BMC Cancer. 2015; 15(1):9.
Gelfand R, Vernet D, Bruhn KW, Sarkissyan S, Heber D, Vadgama JV, Gonzalez-Cadavid NF. Long-term exposure of mcf-7 breast cancer cells to ethanol stimulates oncogenic features. Int J Oncol. 2017; 50(1):49–65. Boyero L, Sánchez-Palencia A, Miranda-León MT, Hernández-Escobar F, Gómez-Capilla JA, Fárez-Vidal ME. Survival, classifications, and desmosomal plaque genes in non-small cell lung cancer. Int J Med Sci. 2013; 10(9):1166. Guo X, Knudsen BS, Peehl DM, Ruiz A, Bok D, Rando RR, Rhim JS, Nanus DM, Gudas LJ. Retinol metabolism and lecithin: retinol acyltransferase levels are reduced in cultured human prostate cancer cells and tissue specimens. Cancer Res. 2002; 62(6):1654–61. Sunil VR, Patel KJ, Nilsen-Hamilton M, Heck DE, Laskin JD, Laskin DL. Acute endotoxemia is associated with upregulation of lipocalin 24p3/lcn2 in lung and liver. Exp Mol Pathol. 2007; 83(2):177–87. Alinezhad S, Väänänen RM, Tallgrén T, Perez IM, Jambor I, Aronen H, Kähkönen E, Ettala O, Syvänen K, Nees M, et al. Stratification of aggressive prostate cancer from indolent disease-prospective controlled trial utilizing expression of 11 genes in apparently benign tissue. Urologic Oncol. 2016; 34(6):255–15. Li S, Zhu Y, Ma C, Qiu Z, Zhang X, Kang Z, Wu Z, Wang H, Xu X, Zhang H, et al. Downregulation of epha5 by promoter methylation in human prostate cancer. BMC Cancer; 15(1):18. Walker BA, Boyle EM, Wardell CP, Murison A, Begum DB, Dahir NM, Proszek PZ, Johnson DC, Kaiser MF, Melchor L, et al. Mutational spectrum, copy number changes, and outcome: results of a sequencing study of patients with newly diagnosed myeloma. J Clin Oncol. 2015; 33(33):3911–20. Bonito B, Sauter DP, Schwab A, Djamgoz MA, Novak I. Kca3.1 (ik) modulates pancreatic cancer cell migration, invasion and proliferation: anomalous effects on tram-34. Pflügers Archiv-European J Physiol. 2016; 468(11-12):1865–75. Dyomin VG, Rao PH, Dalla-Favera R, Chaganti R.
Bcl8, a novel gene involved in translocations affecting band 15q11–13 in diffuse large-cell lymphoma. Proc Natl Acad Sci. 1997; 94(11):5728–32. Simon N, Friedman J, Hastie T, Tibshirani R. A sparse-group lasso. J Comput Graphical Stat. 2013; 22(2):231–45. The authors thank the editors, the associate editor and anonymous referees for helpful comments and suggestions. HJ's research was supported in part by a startup grant from the University of Michigan and the National Cancer Institute grants 4P30CA046592 and 5P50CA186786. LL's research was supported in part by the Summer Internship Funds of the Certificate in Public Health Genetics (CPHG) Program at the University of Michigan. The funding body(s) played no role in the design or conclusions of the study. The datasets and R programs for reproducing the results in this paper are available at http://www-personal.umich.edu/~jianghui/gene-pair/. The Blake School, Minneapolis, 55403, MN, USA: Rex Shen. Department of Biostatistics, University of Michigan, Ann Arbor, 48109, MI, USA: Lan Luo & Hui Jiang. RS conducted the experiments and was involved in the methodology development. LL developed and implemented the algorithm. HJ conceived the study, developed the algorithm and conducted the experiments. All authors drafted, read and approved the final manuscript. Correspondence to Hui Jiang. Shen, R., Luo, L. & Jiang, H. Identification of gene pairs through penalized regression subject to constraints. BMC Bioinformatics 18, 466 (2017). doi:10.1186/s12859-017-1872-9. Keywords: Gene pair; Penalized regression; ADMM
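The λ-sequence construction described before the reference list (λ_max = ∥A^T b∥_∞, then 100 log-spaced values down to ε_λ·λ_max) is mechanical enough to sketch in code. The snippet below is our own NumPy illustration, not the authors' released R code; the function name is ours.

```python
import numpy as np

def lambda_path(A, b, n_lambdas=100, eps_lambda=0.01):
    """Decreasing sequence of regularization values for a solution path:
    log-spaced from lambda_max (where all coefficients are zero) down to
    eps_lambda * lambda_max."""
    lam_max = np.max(np.abs(A.T @ b))   # = ||A^T b||_inf
    lam_min = eps_lambda * lam_max
    return np.logspace(np.log10(lam_max), np.log10(lam_min), n_lambdas)
```

With warm starts, the solution computed at each λ would then seed the ADMM iterations for the next, smaller λ.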
4.3: Directions and Magnitudes
Book: Linear Algebra (Waldron, Cherney, and Denton) — 4: Vectors in Space, n-Vectors
Contributed by David Cherney, Tom Denton, & Andrew Waldron (University of California, Davis)
Consider the \(\textit{Euclidean length}\) of a vector: \[ \|v\| := \sqrt{(v^{1})^{2} + (v^{2})^{2}+\cdots+(v^{n})^{2}}\ =\ \sqrt{ \sum_{i=1}^{n} (v^{i})^{2} }\: .\] Using the Law of Cosines, we can then figure out the angle between two vectors. Given two vectors \(v\) and \(u\) that \(\textit{span}\) a plane in \(\mathbb{R}^{n}\), we can then connect the ends of \(v\) and \(u\) with the vector \(v-u\). Then the Law of Cosines states that: \[ \|v-u\|^2 = \|u\|^2 + \|v\|^2 - 2\|u\|\, \|v\| \cos \theta \] Then isolate \(\cos \theta\): \begin{eqnarray*} \|v-u\|^{2} - \|u\|^{2} - \|v\|^{2} &=& (v^{1}-u^{1})^{2} + \cdots + (v^{n}-u^{n})^{2} \\ & & \quad - \big((u^{1})^{2} + \cdots + (u^{n})^{2}\big) \\ & & \quad - \big((v^{1})^{2} + \cdots + (v^{n})^{2}\big) \\ & = & -2 u^{1}v^{1} - \cdots - 2u^{n}v^{n} \end{eqnarray*} Thus, \[\|u\|\, \|v\| \cos \theta = u^{1}v^{1} + \cdots + u^{n}v^{n}\, .\] Note that in the above discussion, we have assumed (correctly) that Euclidean lengths in \(\mathbb{R}^{n}\) give the usual notion of lengths of vectors for any plane in \(\mathbb{R}^{n}\). This now motivates the definition of the dot product. The \(\textit{dot product}\) of two vectors \(u=\begin{pmatrix}u^{1} \\ \vdots \\ u^{n}\end{pmatrix}\) and \(v=\begin{pmatrix}v^{1} \\ \vdots \\ v^{n}\end{pmatrix}\) is \[u\cdot v := u^{1}v^{1} + \cdots + u^{n}v^{n}\, .\] The \(\textit{length}\) or \(\textit{norm}\) or \(\textit{magnitude}\) of a vector is \[\|v\| := \sqrt{v\cdot v }\, .\] The \(\textit{angle}\) \(\theta\) between two vectors is determined by the formula $$u\cdot v = \|u\|\|v\|\cos \theta\, .$$ When the dot product between two vectors vanishes, we say that they are perpendicular or \(\textit{orthogonal}\). Notice that the zero vector is orthogonal to every vector.
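As a quick numerical illustration of these definitions (our own example, using NumPy):

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([2.0, 0.0, 0.0])

dot = u @ v                           # u·v = u¹v¹ + ... + uⁿvⁿ
norm_u = np.sqrt(u @ u)               # ||u|| = sqrt(u·u) = 3
norm_v = np.sqrt(v @ v)               # ||v|| = 2
cos_theta = dot / (norm_u * norm_v)   # from u·v = ||u|| ||v|| cos θ
theta = np.arccos(cos_theta)          # angle between u and v
```

Here \(u\cdot v = 2\), so \(\cos\theta = 1/3\) and \(\theta = \arccos(1/3) \approx 1.231\) radians.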
The dot product has some important properties: The dot product is \(\textit{symmetric}\), so $$u\cdot v = v\cdot u\, ,$$ \(\textit{Distributive}\) so $$u\cdot (v+w) = u\cdot v + u\cdot w\, ,$$ \(\textit{Bilinear}\), which is to say, linear in both \(u\) and \(v\). Thus $$ u\cdot (cv+dw) = c \, u\cdot v +d \, u\cdot w\, ,$$ and $$(cu+dw)\cdot v = c\, u\cdot v + d\, w\cdot v\, .$$ \(\textit{Positive Definite}\): $$u\cdot u \geq 0\, ,$$ and \(u\cdot u = 0\) only when \(u\) itself is the \(0\)-vector. There are, in fact, many different useful ways to define lengths of vectors. Notice in the definition above that we first defined the dot product, and then defined everything else in terms of the dot product. So if we change our idea of the dot product, we change our notion of length and angle as well. The dot product determines the \(\textit{Euclidean length and angle}\) between two vectors. Other definitions of length and angle arise from \(\textit{inner products}\), which have all of the properties listed above (except that in some contexts the positive definite requirement is relaxed). Instead of writing $\cdot$ for other inner products, we usually write \(\langle u,v \rangle\) to avoid confusion. For example, in special relativity one considers vectors with coordinates \(x, y, z\) and \(t\), and uses an inner product in which the time coordinate enters with a minus sign: \(\langle u,v\rangle = u^{1}v^{1}+u^{2}v^{2}+u^{3}v^{3}-u^{4}v^{4}\). As a result, the "squared-length'' of a vector with coordinates \(x, y, z\) and \(t\) is \(\|v\|^{2} = x^{2} + y^{2} + z^{2} - t^{2}\). Notice that it is possible for \(\|v\|^{2}\leq 0\) even with non-vanishing \(v\)! Theorem (Cauchy–Schwarz Inequality). For non-zero vectors \(u\) and \(v\) with an inner-product \(\langle\:\ ,\:\, \rangle\), \[ \frac{|\langle u,v \rangle|}{\|u\|\, \|v\|} \leq 1 \] To prove this, note that positivity of the inner product gives, for any real number \(\alpha\), \[0 \leq \langle u+\alpha v,\, u+\alpha v\rangle = \langle v,v\rangle\,\alpha^{2} + 2\,\langle u,v\rangle\,\alpha + \langle u,u\rangle\, ,\] a quadratic in \(\alpha\). You should carefully check for yourself exactly which properties of an inner product were used to write down the above inequality! Next, a tiny calculus computation shows that any quadratic \(a\alpha^{2} + 2b \alpha + c\) takes its minimal value \(c-\frac{b^{2}}{a}\) when \(\alpha=-\frac{b}{a}\).
Applying this to the non-negative quadratic \(\langle u+\alpha v,\, u+\alpha v\rangle = \langle v,v\rangle\alpha^{2} + 2\langle u,v\rangle\alpha + \langle u,u\rangle\) gives \[0\leq \langle u,u\rangle -\frac{\langle u,v\rangle^{2}}{\langle v,v\rangle}\, .\] Now it is easy to rearrange this inequality to reach the Cauchy–Schwarz one above. Theorem (Triangle Inequality). Given vectors \(u\) and \(v\), we have: \[ \|u+v\| \leq \|u\| + \|v\| \] Proof: \begin{eqnarray*} \|u+v\|^{2} & = & (u+v)\cdot (u+v) \\ & = & u\cdot u + 2 u\cdot v + v\cdot v \\ & = & \|u\|^{2} + \|v\|^{2} + 2\, \|u\|\, \|v\| \cos \theta \\ & = & \left(\|u\| + \|v\|\right)^{2} + 2 \, \|u\| \, \|v\| (\cos \theta -1) \\ & \leq & \left(\|u\| + \|v\|\right)^{2} \end{eqnarray*} Then the square of the left-hand side of the triangle inequality is \(\leq\) the square of the right-hand side, and both sides are positive, so the result is true. The triangle inequality is also "self-evident'' from a sketch of \(u\), \(v\) and \(u+v\). Notice also that for \(a=(1,2,3,4)\) and \(b=(4,3,2,1)\), \[a\cdot b = 1\cdot 4 + 2\cdot 3 + 3\cdot 2 + 4\cdot 1 = 20 < \sqrt{30}\cdot\sqrt{30} = 30 = \|a\|\, \|b\|\, ,\] in accordance with the Cauchy–Schwarz inequality. David Cherney, Tom Denton, and Andrew Waldron (UC Davis)
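Both inequalities are easy to check numerically. The snippet below (our own, in NumPy) verifies them for the vectors \(a=(1,2,3,4)\) and \(b=(4,3,2,1)\) from the example above:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([4.0, 3.0, 2.0, 1.0])

# Cauchy–Schwarz: |a·b| <= ||a|| ||b||
lhs_cs = abs(a @ b)                              # 20
rhs_cs = np.linalg.norm(a) * np.linalg.norm(b)   # sqrt(30)*sqrt(30) = 30
assert lhs_cs <= rhs_cs

# Triangle inequality: ||a+b|| <= ||a|| + ||b||
lhs_tri = np.linalg.norm(a + b)                  # ||(5,5,5,5)|| = 10
rhs_tri = np.linalg.norm(a) + np.linalg.norm(b)  # 2*sqrt(30) ≈ 10.95
assert lhs_tri <= rhs_tri
```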
How to reshape matrix into row-major order for MKL DSS? I would like to use MKL to solve a sparse linear system. I chose the DSS (Direct Sparse Solver) interface, which implements the following steps: //(1).define the non-zero structure of the matrix dss_define_structure(handle, sym, rowIndex, nRows, nCols, columns, nNonZeros); //(2).reorder the matrix dss_reorder(handle, opt, 0); //(3).factor the matrix dss_factor_real(handle, type, values); //(4).get the solution vector dss_solve_real(handle, opt, rhs, nRhs, solValues); //(5).deallocate solver storage dss_delete(handle, opt); According to my test, the DSS uses column-major ordering. That means //nRhs = 3 { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 } is equivalent to {{1, 6, 11}, {2, 7, 12}, {3, 8, 13}, {4, 9, 14}, {5, 10, 15}} For instance, consider a sparse linear system $\mathbf A_{5\times 5}\mathbf X_{5\times 3}=\mathbf B_{5\times 3}$ where $\mathbf A$ is a symmetric sparse matrix //A stored with CSR3 format rowIndex = { 0, 5, 6, 7, 8, 9 }; columns = { 0, 1, 2, 3, 4, 1, 2, 3, 4 }; values = { 9, 1.5, 6, .75, 3, 0.5, 12, .625, 16 }; //B rhs[5*3] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15} //X solValues[15] By calling the above DSS interface, the solValues is {-326.333, 983, 163.417, 398, 61.5, -844.667, 2548, 423, 1028, 159, -1363, 4113, 682.583, 1658, 256.5} $$ \mathbf X_{5\times 3}=\left( \begin{array}{ccc} -326.333 & -844.667 & -1363 \\ 983 & 2548 & 4113 \\ 163.417 & 423 & 682.583 \\ 398 & 1028 & 1658 \\ 61.5 & 159 & 256.5 \\ \end{array} \right) $$ In my application, the right-hand-side and solution matrices are stored in row-major order. How can I deal with this? linear-solver sparse-matrix intel-mkl xyzxyz
The example you show is a dense matrix. The DSS (obviously) deals with the sparse matrix format (shown here: software.intel.com/en-us/node/…). How are you storing the sparse matrix in your software? – Bill Greene Mar 21 '17 at 12:19
@BillGreene In my application, I also store the sparse matrix in the CSR3 format with zero-based indexing. – xyz Mar 22 '17 at 6:45
@BillGreene In addition, for a linear system $\mathbf A_{n \times n}\mathbf X_{n \times c}=\mathbf B_{n \times c}$, the matrices $\mathbf X$ and $\mathbf B$ are stored in row-major ordering. – xyz Mar 22 '17 at 6:51
According to the DSS documentation page I referenced, it requires CSR format. So you should be all set. – Bill Greene Mar 22 '17 at 11:36
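One way to handle the right-hand-side layout, sketched here with NumPy rather than the C interface (the DSS calls themselves would be unchanged): flatten the row-major matrix in Fortran (column-major) order before passing it as rhs, and reinterpret solValues the same way afterwards. The solver names in the comments refer to the question's code; the round trip itself is just a reshape.

```python
import numpy as np

# B as the application stores it: row-major, 5 rows x 3 right-hand sides.
B_rowmajor = np.arange(1.0, 16.0).reshape(5, 3)   # rows: [1,2,3], [4,5,6], ...

# dss_solve_real expects each right-hand-side column stored contiguously,
# i.e. the column-major (Fortran-order) flattening of B:
rhs_for_dss = B_rowmajor.flatten(order="F")       # [1,4,7,10,13, 2,5,8, ...]

# ... dss_solve_real(handle, opt, rhs_for_dss, 3, sol_values) would go here ...
sol_values = rhs_for_dss.copy()                   # placeholder for the solver output

# Reinterpret the column-major solver output as a row-major 5x3 matrix:
X = sol_values.reshape(5, 3, order="F")
```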
Noether's Theorem – A Quick Explanation (2019) (quantum-friend-theory.tumblr.com) 69 points by panic 6 days ago | hide | past | web | favorite | 30 comments mikorym 6 days ago I am not well versed in the Physics counterpart of Noether's work, but the mathematics side is quick to explain: the isomorphism theorems in group theory (originally, in ring theory) were one of the main impacts of her work. A quick skim of Wikipedia tells me that the isomorphism theorems were later than the theorem in physics [1] [2] by about 5 years. Looking at it quickly, these are not the same thing, and although in spirit similar, I am not sure what the exact historical progression was. I wonder if standard abstract algebra modules could teach this history in a way that tells more about the story behind the isomorphism theorems. In my abstract algebra class, Noether was not really mentioned, and usually the focus is (for example) on the progress from Greek geometry (e.g. squaring the circle) to modern algebra. I don't think anyone is at fault for this, but I would personally like to have a more accessible introduction to Noether's legacy via pure mathematics. [1] https://en.wikipedia.org/wiki/Isomorphism_theorems [2] https://en.wikipedia.org/wiki/Noether%27s_theorem Chinjut 5 days ago Ring theory in general was founded in large part via her work. I always find it funny that everyone only makes such a fuss about one physics conservation law/symmetry correspondence she observed (for physical theories that happen to be phrased in terms of Lagrangians/Hamiltonians), while the immense mathematical achievement of her extensive work on ring theory goes largely undiscussed in these kinds of conversations. thanatropism 5 days ago Noether's theorem is kind of a grand closure of natural science. It's one of those theorems that gives the sensation of transcending its axiomatic setting and reveal deep philosophical truth. 
Compare Arrow's impossibility theorem, which is set in the much more familiar realm of order theory, but still appears to say something about the real world. Rerarom 5 days ago In my pure math studies there was way more emphasis on her work in commutative algebra than on the symmetry / conservation law correspondence. Btw a neglected founder of modern algebra is Steinitz, whose early work on fields (1910) inspired Noether's work on rings. mjfl 5 days ago It's because the physics is more important. i4t 6 days ago I'm amazed I understood all of this thanks to Leonard Susskind. mannykannot 6 days ago The scope of this theorem seems so broad and general, apparently covering any possible physical symmetry, whether or not conceived of yet, that I wonder if it could reasonably be called a metaphysical one. auntienomen 4 days ago Any continuous symmetry. Noether's theorem doesn't cover discrete symmetries. Also, only in classical physics. In the quantum realm, Noether's theorem is basically a tautology. Filligree 6 days ago In the same sense as quantum mechanics is, perhaps. It's more math than physical law, but the laws sit on top of it. Yes - the debates over the rival interpretations are, I think, widely (though not by everyone) regarded as metaphysical. ouid 5 days ago Just as a thought exercise. Try to imagine a universe in which it was impossible to define conserved quantities. LargoLasskhyfv 5 days ago Nice site. Permanently bookmarked. throwlaplace 5 days ago now do gauge invariance loudmax 5 days ago Unfortunately, equations like this are hard to read if you aren't a fluent reader of LaTeX: $$\left.\frac{\partial L}{\partial q} \right|_{\tilde{q} = \dot{q}} = \frac{dp}{dt}$$ Perhaps there's a tumblr theme or extension that can convert LaTeX to HTML. lalaithion 5 days ago jordigh 5 days ago It's running mathjax to convert them. Are you perhaps blocking js?
lidHanteyk 6 days ago Modern physics assumes symmetry and then searches for models which fit observations under the constraint of symmetry. However, we are starting to wonder whether the symmetries truly exist. Time, charge, parity, position, momentum; we hope that they have symmetries. wtallis 6 days ago That doesn't sound like a fair description. Physics doesn't simply assume symmetries exist. We've observed symmetries that hold up under all observational and experimental scenarios available to us. From that, it's quite reasonable to consider the possibility that those symmetries are truly universal, and to study models that treat those symmetries as universal. Those models have great predictive power for ordinary scenarios of the sort where we know from experience that the symmetries do hold up, and those models tend to be simpler than ones that break symmetry in exotic scenarios. When we actually find a broken symmetry (eg. space, time and momentum behaving oddly at speeds close to c), it's time to update those models, which then usually reveals a more subtle symmetry and conservation law. But there's no big trend in physics that casts doubts on all the important symmetries in physics. Even under as-yet undiscovered theories, the symmetries we're familiar with will always remain true in the limit as the conditions approach familiar everyday circumstances. Nobody is worried that concepts like time, charge, momentum, etc. will be discovered to be a silly, unfounded idea that the universe casually disregards. We're just not adding extra terms to the equations until there's a need for them. xorand 5 days ago I second this. Conserved quantities (by extension symmetries) do not compute. Car engines and refrigerators are both thermal engines, but the same conserved quantities are engineered (say programmed) to do different things. stareatgoats 5 days ago > Nobody is worried that concepts like time, charge, momentum, etc. 
will be discovered to be a silly, unfounded idea While not overly worried, I have to confess that I'm at least somewhat worried. And somewhat convinced that our present conceptions of these things will one "day" be turned on their head, in a way that might also seriously affect our everyday conceptions of them. I would support adding a couple of properties to most if not all our established models; notably 1: known unknowns, and 2: unknown unknowns. It would seem a healthy antidote to the hubris that runs rampant among scientists, at least before they discover the ubiquity of these two properties (they usually do around the 50 year mark). andygates 5 days ago This is word salad in a math-heavy field. leereeves 6 days ago What does it mean to "wonder whether the symmetries truly exist"? Isn't that a question to be answered by experiment? blablabla123 6 days ago Perhaps "truly" also in the sense of to what precision. To the precision of measurement in experiments is usually fine :) When the theory is not measurable anymore, then it's questionable whether it's Physics or already Meta-Physics... Is the universe expanding? How could you tell? The attempt to answer these questions has tripped up everybody since Hubble and Einstein. gpderetta 6 days ago Hum, the universe is both predicted and observed to expand. BoiledCabbage 6 days ago > Hum, the universe is both predicted and observed to expand. Or the universe may not be expanding and time could be contracting. ben_w 6 days ago What does "time contracting" mean? jerf 5 days ago While I understand the similarity of what BoiledCabbage is saying to whackjob theories, BoiledCabbage is actually referencing real physics discussions that have occurred in the past. Specifically, there is the problem of measuring the universe when you don't have anything not in the universe itself to do the measuring with.
What if, for instance, the three dimensions aren't all the same size as it seems, but actually, x is five times larger than y or z, and every time things rotate, including all of your measurement apparatus, they actually grow and shrink? How would you detect such a thing? What if as you look out into the universe, all the constants actually shift in a way that time ends up running at a different speed, so instead of an expanding universe and light taking a long time to get to us, instead there is some sense in which the external universe "really is" younger, and the reason we witness ourselves in a bubble of old universe is simply the anthropic principle? These were all very serious questions around the time of relativity being nailed down, especially since one of the implications of relativity is precisely that you don't have any absolute rulers anymore. While overall the scientific consensus is the scientific consensus, the problem still remains to a lesser degree even today. There's a recent paper questioning whether or not the universe's expansion is actually accelerating, because they posit a systematic error in our understanding of the standard candle supernovas used to measure it. What if the "standard candle" turns out to be more dependent on the chemical makeup of the stars than was previously understood, since as the universe gets older the stars have more non-hydrogen in them, so as you look to the younger parts of the universe you misjudge how far away they are systematically? Your ruler was bent in a way you didn't realize, so the distances don't work, so it looks like the universe's expansion has been accelerating and there must be this "dark energy" that ends up making the majority of the universe, when it could just be a chimera of our inability to reference absolute measurements. empath75 5 days ago If you find a symmetry violation, you'll probably get a nobel prize. Best of luck.
Open Access Original Research Experimental Vapor-Pressures and Derived Thermodynamic Properties of Geothermal Fluids from Baden-Baden Geothermal Field (Southeastern Germany) Misirkhan A. Talybov 1 , Lala A. Azizova 1 , Ilmutdin M. Abdulagatov 2, 3, * 1. Azerbaijan Technical University, Department of Thermal Engineering, Baku, Azerbaijan 2. Geothermal Research Institute of the Russian Academy of Sciences, Makhachkala, Dagestan, Russian Federation 3. Dagestan State University, Makhachkala, Dagestan, Russian Federation * Correspondence: Ilmutdin M. Abdulagatov Received: September 24, 2019 | Accepted: December 12, 2019 | Published: December 24, 2019 Recommended citation: Talybov MA, Azizova LA, Abdulagatov IM. Experimental Vapor-Pressures and Derived Thermodynamic Properties of Geothermal Fluids from Baden-Baden Geothermal Field (Southeastern Germany). Journal of Energy and Power Technology 2019;(4):27; doi:10.21926/jept.1904006. Background: In the present study, vapor-pressures of three geothermal fluids from Baden-Baden geothermal field (Kirchenstollen, Friedrichstollen, and Murquelle, southeastern region of Germany) were measured over the temperature range of 274–413 K. The combined expanded uncertainties of the temperature and vapor-pressure measurements at the 95% confidence level with a coverage factor of k = 2 were estimated to be 0.01 K and 1–3 Pa at low and 10–30 Pa at high temperatures, respectively. The measured values of vapor-pressure were used to calculate other crucial derived thermodynamic properties of these geothermal fluid samples, such as enthalpy and entropy of vaporization and the heat capacity. Methods: The measurements were performed using two different methods and experimental apparatuses: (1) an absolute and differential static method, used at low temperatures ranging from 274.15 to 323.15 K; and (2) an absolute static method, used at elevated temperatures ranging from 323.15 to 413.15 K.
Results: The data obtained from the measurements were utilized to formulate Antoine- and Wagner-type vapor-pressure equations. The effects of various ion species on the vapor-pressure of the geothermal fluids were studied. In addition, the measured vapor-pressure values were utilized to develop a Riedel-type correlation model for natural geothermal fluids in order to estimate the contributions of the various ion species to the total experimentally-observed values of vapor-pressure. It was observed that the anions increased the vapor-pressure, while the cations decreased it, with the rates (magnitudes) of these increases and decreases being different and strongly dependent on the chemical nature of the ion species involved. Using the measured vapor-pressure data, the other key thermodynamic properties (such as enthalpy and entropy of vaporization and the heat capacity) of the geothermal fluid samples were calculated. Conclusions: The measured vapor-pressure values of the geothermal fluids were higher than the pure water values (IAPWS standard data) by 5.5%–25.4% for Kirchenstollen, 3.0%–11.4% for Friedrichstollen, and 5.3%–14.8% for Murquelle, depending on the temperature. The largest deviations (up to 11%–25%) were observed at low temperatures (approximately 277 K), while at high temperatures, the deviations were within the range of 3.0% to 5.5%. This could be attributed to the effects of soluble gases in the geothermal fluids, which were observed to strongly affect the measured vapor-pressure. The experimentally observed vapor-pressure was the result of the competition between the opposite effects of the anion and cation contributions. Keywords: Enthalpy of vaporization; geothermal fluids; heat capacity; thermodynamic properties; vapor-pressure; water. Geothermal energy has great potential worldwide [1,2].
In order to achieve the effective utilization of geothermal resources, precise thermodynamics and transport properties data are required for the initial resource estimates, production and reservoir engineering studies, and binary geothermal power cycle optimization. The energy characteristics of geothermal fluids would then be extracted directly from their thermodynamic property data [3]. Geothermal fluids are aqueous salt solutions that are heated by the natural heat flow from the earth (i.e., heated by natural hot rocks). High-temperature geothermal fluids with temperatures of approximately 120°C are generally used for electricity generation, while the low-temperature geothermal fluids (with temperatures below 60 °C) are used directly to supply thermal energy for applications such as agriculture, aquaculture, and space heating. Accurate thermodynamic property data of the geothermal fluids at power plant operating conditions are important [4]. Using these data along with the chemical composition of the geothermal fluids enables proper power plant dimensioning, especially the size specification of the heat exchanger, which is one of the main important components that determine the operational efficiency of the plant [5,6,7] and other geothermal energy utilization devices or the likelihood of scaling and/or corrosion development within the wells and the surface installations. The power plant design (energy production and equipment size) is considerably dependent on the thermo-physical properties of the geothermal fluid used. In order to utilize the geothermal resources as efficiently and economically as possible, and to ensure minimum disruption to the environment, modeling of the geothermal systems (multi-phase underground flows, phase transition processes in different reservoir zones, reservoir installations and wells, and geothermal engineering) is required, which in turn requires precise thermo-physical property data [8,9]. 
Modeling of the geothermal system assists in determining its natural (prior to exploration) state and its behavior under exploration [5,6,7,10,11,12]. The application of geothermal fluids for providing direct heat and for electricity generation requires reliable thermodynamic and transport property data which would determine the energy input of the plant, as was demonstrated in previous studies [13,14,15]. The total heat content (energy amount) of the geothermal fluid is dependent on its density, temperature, and heat capacity, as reported by Schröder et al. [10,11,12]. Imprecise knowledge of the geothermal water properties (isobaric heat capacity) leads to inaccurate knowledge of the geothermal water heat content, and in turn to incorrect knowledge of the heat input to geothermal power plants. Geothermal fluids consist of complex mixtures of water, salts dissolved in a liquid phase, and dissolved gases [16]. Pure water is the main component of this mixture. Geothermal brines are mainly sodium chloride solutions. Sodium chloride (NaCl) typically constitutes 70%–80% of the total dissolved solids (TDS) in geothermal brines. Calcium is the other major cationic constituent of the geothermal brines. Chloride ion is the only major anionic constituent of these brines, while their second most important anionic ingredient is the bicarbonate ion. Due to the lack of thermodynamic property data, in most cases, geothermal fluids were modeled as pure water [17] or as binary (H2O+NaCl) [18] and ternary [19] aqueous salt solutions. Anderson et al. [20] developed a PVT model to predict the thermodynamic properties of a prototype geothermal fluid represented by the three-component H2O+CO2+NaCl mixtures. The properties of these mixtures were used in numerical simulations developed for the natural geothermal fluids and as reference data for designing the power plant and its components [20,21].
Seawater is the natural fluid that is the most similar to geothermal water because its main ionic constituents are similar to those in the geothermal fluids. The properties of seawater have been studied widely by several authors (see, for example, [22]). Besides the dissolved solids, geothermal fluids may contain considerable amounts of gases. The main representatives of non-condensed gases in the geothermal systems are CO2, CH4, H2S, N2, and H2. The contents of salts and dissolved gases significantly alter the reservoir geothermal performance. Therefore, it is crucial to consider the effect of the constituent salts and dissolved gases on the thermo-physical properties of natural geothermal fluids [23]. The previous publications by our research group [24,25,26,27] report the experimental study of the density, speed of sound, heat capacity, and the viscosity of natural geothermal fluids obtained from various geothermal wells (Russia and Azerbaijan) and having different chemical compositions (varying with the locations). The present paper reports the continuation of the study on the thermodynamic and transport properties of these natural geothermal fluids. A detailed review of the previous studies on the properties of geothermal brines was provided in the recent publications by our research group [24,25,26] (see also, Schröder et al. [10,11,12]). Limited thermodynamic property data for natural geothermal brines have been published so far. Most of the reported data are for the geothermal brines having only binary or ternary aqueous salt solutions as their main components (basically for the synthetic geothermal brines). Schröder et al.
[10,11,12] proposed in-situ measurement techniques for the accurate measurement of the key physico-chemical properties of natural thermal water, such as isobaric heat capacity, density, and kinematic viscosity, at plant operating conditions, thereby avoiding the risk of changes in the water composition as a consequence of sample collection. Owing to the scarcity of data on the thermodynamic properties of geothermal fluids, an approach different from the one used previously for estimating these properties was adopted in certain studies [21,28,29,30,31,32,33,34,35,36]. Wahl [37] proposed a correlation for the density of geothermal brines as a linear function of salt concentration. The simplest way to determine the thermodynamic properties of the geothermal fluids is the one based on pure water properties, because pure water is the dominant constituent and governs the properties (thermodynamic behavior) of the aqueous salt solutions and geothermal brines [38,39,40,41,42]. However, using direct experimental thermodynamic data of the particular natural geothermal fluid being studied allows minimization of the errors arising from empirical data prediction in geothermal brine models. Moreover, the brine composition may be altered during production. Therefore, direct measurements of natural geothermal brines from various regions (wells) throughout the world containing various concentrations of dissolved electrolytes are required. This would allow the generalization of the properties of various geothermal fluids from various geothermal fields (locations) containing various solutes and would assist in developing prediction models for geothermal brines with any chemical composition. Unfortunately, the currently available theoretical models are often unable to describe the real systems encountered in practice.
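A linear-in-salinity correlation of the kind attributed to Wahl [37] amounts to a one-line correction on top of the pure-water density. The sketch below illustrates the form only; the coefficient k is an assumed placeholder, not a fitted value from [37]:

```python
def brine_density(rho_water_kg_m3, salinity_g_per_l, k=0.70):
    """Wahl-type linear correction: rho_brine ~ rho_water + k * c.
    k (kg/m^3 per g/L) is an illustrative, assumed coefficient."""
    return rho_water_kg_m3 + k * salinity_g_per_l

# Pure-water density near 50 C (~988 kg/m^3) plus a low mineralization of 3 g/L
rho = brine_density(988.0, 3.0)
```

For low-mineralization brines such as those studied here, the correction is small, which is why pure-water-based estimates are a workable first approximation.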
For instance, accurate prediction of the thermodynamic and transport properties of complex multi-component ionic aqueous solutions such as geothermal fluids is extremely difficult due to their complexity. From the microscopic point of view, the effect of individual ionic contributions to the properties of the aqueous solution is dependent on the structure of the ions (shape, size, ionic environment, polarization orientation, ion mobility, etc.). Even for the binary aqueous salt solutions, it is quite difficult to accurately estimate the effect of ions on their properties. It is almost impossible to accurately estimate the effect of all the dissolved salts on the properties due to the extremely complex interactions among the salt ions, dissolved gases, and the water molecules, as well as the ion–ion interactions, in the multi-component aqueous solutions. Therefore, accurate thermodynamic data for natural geothermal fluids are of interest to researchers for the study of the fundamental physico-chemical basis of the theory of multi-ionic interactions on the microscopic level. Models with better predictive abilities may be developed on the basis of reliable direct experimental information on the thermodynamic and transport properties of natural geothermal brines. However, experimental study of the thermodynamic properties of each geothermal fluid is a formidable task, and therefore, theoretical or semi-empirical models that would be able to predict the thermodynamic properties of complex geothermal brines would be useful. In order to quantitatively describe the thermodynamic and transport properties of geothermal fluids as a function of T, P, and x, a thermodynamic model (equation of state) or a reference correlation model for the transport properties would be required. In addition, direct measurements of the thermo-physical properties of natural geothermal brines with complex compositions are required to confirm the applicability and accuracy of the mixing rules.
2.1 Characteristics of Geothermal Field Location and Wells

The three geothermal fluid samples used in the present study were collected from three geothermal hot wells in the Baden-Baden geothermal field, Germany [Kirchenstollen (48°45'47.60" N, 8°14'29.17" E), Friedrichstollen (48°45'49.40" N, 8°14'31.35" E), and Murquelle (48°45'48.62" N, 8°14'33.66" E)]. The geothermal field is located in the eastern part of the Rhine (Baden-Baden) trough. The area map (locations of the hot wells) with the potential for hydrogeothermal exploitation in Germany may be obtained from certain previous reports [43,44]. The depths of the wells from which the samples were obtained ranged from 1200 to 1800 m. The well-head temperatures Twh were within the range of 64.5–69 °C. The debit was approximately 800,000 L/day (9.26 L/s). The geothermal gradient in this region varied from 3 to 10 °C/100 m. The most important geological settings for geothermal energy in Germany are the deep Mesozoic sediments, which may be located in the North German Basin, the Upper Rhine Graben, and the South German Molasse Basin [43]. Several projects are under development in the Upper Rhine Graben, which is one of the regions with hydrogeothermal potential. Above-average geothermal gradients render this region interesting for the development of electricity projects [45]. The new 5 MWe ORC plant of Insheim in the Upper Rhine Graben began producing geothermal electricity in November 2012, and heat extraction is planned in the further development of the project [45]. Most of the geothermal plants are located in the Molasse Basin in southern Germany and along the Upper Rhine Graben. The main objective of the present study was the accurate measurement of the vapor-pressure of three natural geothermal fluids obtained from the Baden-Baden geothermal field (Kirchenstollen, Friedrichstollen, and Murquelle, Germany) as a function of temperature in the range of 274.15–413.15 K.
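The two debit figures quoted above are consistent with each other, as a quick unit conversion confirms:

```python
debit_l_per_day = 800_000
seconds_per_day = 24 * 60 * 60          # 86,400 s

debit_l_per_s = debit_l_per_day / seconds_per_day
# ~9.26 L/s, the per-second value quoted in the text
```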
In the present work, a detailed experimental study of the effects of dissolved salts (salinity, and therefore, the effect of location), the nature of the chemical composition, and soluble gases on the temperature behavior of the vapor-pressure of the geothermal brine samples collected from the Baden-Baden geothermal field was performed. The present work provides accurate vapor-pressure data and a few derived key thermodynamic properties (enthalpy and entropy of vaporization and the heat capacity) for the three natural geothermal brines with different mineralogical compositions, collected from the Kirchenstollen, Friedrichstollen, and Murquelle hot wells in Germany. The study area (Baden-Baden) has great geothermal potential, is among the foremost locations for balneological treatment and society events, and is one of the most prestigious and historic thermal spas in Germany. The existence of these hot springs has been associated with the deep faults located at the eastern end of the Upper Rhine Graben. The springs have been in use since the middle of the 19th century; after 1868, a system of tunnels was constructed to capture the springs and to increase the production and temperature. The system consisted of two main tunnel areas, one just below the castle with "Friedrichstollen" as the main tunnel, and the other close to the marketplace with "Kirchenstollen" and "Rosenstollen". A new, large bathing facility, the "Friedrichsbad", was built later. The geothermal gradient in the second of the deep holes was measured to be 5.1 °C/100 m. This appeared promising for the use of geothermal energy if either water could be found or the technologies from the Hot-Dry-Rock development could be used. Currently, the activities in Baden-Baden are dominated by two major bathing facilities. One of these two facilities is the traditional Friedrichsbad, which has been providing relaxation and healing for more than a century.
Friedrichsbad receives its supply of thermal water from the traditional hot springs as well as from the two wells drilled in the 1960s. The thermal water is also delivered to three public drinking fountains and to several private users (such as hotels for hot bathing, hospitals for healing purposes, etc.). The annual consumption of thermal water from Friedrichsbad is 83,621 m3/year. Geothermal district heating is accomplished with two or more geothermal wells, with at least one serving as a production well and one serving as an injection well. Re-injection of the cooled geothermal fluids is necessary to maintain the pressure in the reservoir and to avoid the contamination of surface waters or the shallow aquifers with high salt loads or toxic fluid constituents. Several hot springs supply thermal water to the spa facilities, with temperatures ranging from 52 to 67 °C and mineralization within the range of 2680–3522 mg/kg. The total thermal water production in Baden-Baden is 9.4 L/s. The thermal water has an energy content of 2 MW, although complete energy use has not been achieved so far [4].

2.2 Sample Description

Thermodynamic properties of natural geothermal fluids are affected strongly by their chemical composition (concentration and the type of salt and gas contents, as stated earlier). Geothermal fluid is a brine solution formed as a result of the natural movement of water through the crust of the Earth. The brine compositions vary from well to well, depending on the depth of the production and the temperatures of the different parts of the reservoir [46], which leads to the precipitation of certain components (phase-equilibrium behavior of the brine at different pressures and temperatures). Therefore, the geothermal fluids collected from different wells have different chemical compositions, and their properties also vary.
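The 2 MW energy content and the 9.4 L/s total production quoted above can be cross-checked against each other. Assuming a density near 1 kg/L and a heat capacity near that of pure water (both assumptions, since the brine properties are precisely the subject of this study), the implied usable temperature drop is about 51 K:

```python
m_dot = 9.4       # kg/s, assuming ~1 kg per litre of thermal water
cp = 4180.0       # J/(kg K), assumed close to pure water
power = 2.0e6     # W, the energy content quoted in the text

delta_t = power / (m_dot * cp)  # usable temperature drop, ~51 K
```

A drop of roughly 51 K is plausible for water produced at 52–67 °C and cooled to near-ambient temperature, which supports the quoted 2 MW figure.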
Studies conducted on the composition of dissolved ions in the geothermal fluids [24,25,26] have indicated considerable variations when moving from one area to another. The chemical compositions of the geothermal fluid samples collected from the Kirchenstollen, Friedrichstollen, and Murquelle hot wells of the Baden-Baden (Germany) geothermal field are listed in Table 1. An IRIS Intrepid II Optical Emission Spectrometer and ion chromatography techniques were utilized for the quantitative determination of the elemental composition (cations and anions) of the geothermal brine samples. The accuracy ranged between 0.2% and 1.0%. The elements were ionized in the argon plasma flame and were analyzed using a high-resolution mass spectrometer. As observable from Table 1, the total mineralization values for the geothermal fluid samples from the Kirchenstollen, Friedrichstollen, and Murquelle wells were 2.74 g/L, 2.60 g/L, and 2.75 g/L, respectively, i.e., all the values were almost equal. The main chemical composition distributions for the hot wells obtained on the basis of the data in Table 1 are presented in Table 2. As may be noted from Table 2, the main components of the geothermal fluid samples were: chloride (52.9% to 55.9%), sodium (26.1% to 28.7%), sulfate (6.7% to 6.8%), calcium (3.9% to 4.4%), potassium (2.2% to 2.4%), and Si and S (1.9% to 2.1%). Therefore, the major mineral components in the studied geothermal fluid samples were Cl-, Na+, SO4-2, Ca+2, K+, Si+, and S+. Salinity was derived mainly from Na+, K+, Ca+2, Si+, and S+, and from SO4-2 and Cl- ions, all of which together comprised approximately 71% to 73% of all the compounds in the fluid solution. All the samples exhibited similar concentrations of Na+, Ca+2, K+, Si+, S+, SO4-2, and Cl-, indicating relatively homogeneous compositions at that depth. The pH-values of the samples measured on the surface immediately after pressure release varied between 6.43 and 7.47.
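The percentage distribution in Table 2 is simply each ion's share of the total mineralization. A sketch with hypothetical round-number concentrations (the measured values are in Table 1; the numbers below are chosen only to fall in the reported ranges):

```python
# Hypothetical ion concentrations in mg/L; see Table 1 for the measured data.
ions_mg_l = {"Cl-": 1450.0, "Na+": 760.0, "SO4-2": 185.0, "Ca+2": 115.0, "K+": 62.0}
total_mg_l = 2740.0  # total mineralization of 2.74 g/L expressed in mg/L

shares_pct = {ion: 100.0 * c / total_mg_l for ion, c in ions_mg_l.items()}
# e.g. Cl- -> ~52.9 %, within the 52.9-55.9 % range reported for the three wells
```

The shares of the listed major ions sum to less than 100 % because minor constituents (bicarbonate, Si, S, trace metals) make up the remainder.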
The composition of a particular well was observed to vary as a function of the total production time, the rate of flow, and the nature of the underlying sediments. Table 3 compares the main chemical composition of the geothermal fluid sample from hot-well Friedrichstollen as determined in the present study with the data reported by Sanner [4] in 2000. As observable from Table 3, there was a slight increase in the sodium (by 14%), potassium (19%), lithium (27%), calcium (12.5%), chloride (4.6%), and nitrate (46%) contents, and a decrease in the magnesium (by 90%) and sulfate (14.7%) contents. The main gas contents [together constituting approximately 90% to 95% of the gas content] in the samples were nitrogen, carbon dioxide, methane, hydrogen, argon, helium, and oxygen. The carbon dioxide content in the samples was approximately 132 mg/L (or approximately 5%). Prior to the measurements, the geothermal brine samples were filtered for the removal of suspended solids using filters of 2-micron pore size.

Table 1 Chemical compositions of the geothermal brines (mg/L) for the Kirchenstollen, Friedrichstollen, and Murquelle wells.

Table 2 Distribution of the main chemical composition for the geothermal fluid samples.

Table 3 Comparison of the main chemical composition (cations and anions, mg/L) of the geothermal fluid sample from hot-well Friedrichstollen: this work vs. Sanner [4].

2.3 Vapor-Pressure Measurements

As stated earlier in the Introduction section, vapor-pressure is a property of natural water that is highly sensitive to salt and gas concentrations. In order to achieve an accurate estimation of salinity based on its relationship with vapor-pressure, which is comparable to the accuracy achieved in the direct measurement of salinity, a vapor-pressure uncertainty of <3 Pa at low temperature and <30 Pa at high temperature is required. In this context, a substitution measurement involving two methods has been proposed in the present study.
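The composition drift quoted from Table 3 is a simple relative change between the 2000 values and the present ones. A sketch with hypothetical sodium concentrations (the actual mg/L values are in Table 3; these are chosen only to reproduce the reported +14 %):

```python
def pct_change(old, new):
    """Relative change in percent between two reported concentrations."""
    return 100.0 * (new - old) / old

# Hypothetical mg/L values chosen to reproduce the +14 % sodium change
change = pct_change(630.0, 718.2)
```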
The measurements were performed using two different experimental techniques and apparatus: (1) the absolute and differential static method, used at low temperatures ranging from 274.15 to 323.15 K; and (2) the absolute static method, used at elevated temperatures ranging from 323.15 to 413.15 K. The vapor-pressure measurements of the geothermal fluid samples collected from the Baden-Baden geothermal field (Germany) for the present study were performed at the Department of Technical Thermodynamics, Rostock University. The main part of the experimental apparatus (Figure 1) for the vapor-pressure measurements at low temperatures (ranging from 274.15 to 323.15 K) consisted of glass cells–3, 4, and 27. These glass cells comprised inner and outer volumes in which the distilled water supplied from thermostat–21 (Lauda Gold R–415, Germany) flowed. The temperature inside these measurement cells was achieved and stabilized using thermostat–21 with an uncertainty of 0.01 K. The temperature was measured using PRT–100–6 and 35, with an uncertainty of 0.01 K. The volume of each glass cell was approximately 80 cm3. In the case of solutions with low concentration, for which the vapor-pressure was quite close to the vapor-pressure of the solvent (i.e., if the vapor-pressure difference was approximately 10 to 30 Pa, within the uncertainty range of the static cell), the measurements were performed in the differential part–3 and 4–of the experimental apparatus. In this part, both cells were immersed in the same water reservoir. The temperature in the reservoir was measured using PRT–100–6 (Figure 1), with an uncertainty of 0.01 K. The vapor-pressure values in the measurement cells were measured using high-precision pressure meters: 10, MKS Baratron type 616A (USA), in the differential part of the apparatus with an uncertainty of 1–3 Pa, and 23, MKS Baratron type 615A (USA), in the static part with an uncertainty of 10–30 Pa.
These measurements were performed at the temperature of 333.15 K within the reservoirs–11 and 22, which were thermostated to maintain a constant temperature. Water was supplied to the reservoir through a HAAKE thermostat (Germany). The derived results were transmitted to the pressure signal indicators–13 and 14 (Figure 1) through the adapters–12 and 15, and then directly to the computer–34 using the LabVIEW software. The measured temperature values were also transmitted to the same computer–34. The computer controlled the stabilization of the vapor-pressure at a given temperature and, after measuring the pressure, changed the temperature of the experiment to the next set point at a given interval, up to the maximum value. When the maximum temperature of the experiment was reached, the computer changed the temperature in the opposite direction, and the vapor-pressure was then measured down to the minimum temperature, thereby repeating the experimental points. The repeatability could also be established in advance using the computer software system. Prior to the commencement of the measurements, the measurement cells were washed thoroughly with water and acetone, and then evacuated using a vacuum system–31 to 33 (Figure 1). The magnets–2 and 36–located in the measurement cells were kept rotating using magnetic stirrers–1 and 28, which assisted in reaching the equilibrium condition (stabilization of the liquid–vapor system). Since the measurement cells and the pressure meter MKS Baratron were located at a distance of 40 cm from each other, a special design had to be used to connect them. Special «glass–metal» adapters from MDC Vacuum Limited (England) were welded to the glass measurement cells, and special nozzles from VAT Deutschland GmbH were welded to the metallic part of the «glass–metal» adapters. In-between these nozzles, sealing discs with a rubber gasket were mounted.
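The up-then-down set-point sequence described above (ascend to the maximum temperature, then retrace so that every point is measured twice) can be sketched as follows; the function name and step handling are illustrative, not taken from the actual LabVIEW control code:

```python
def sweep_set_points(t_min, t_max, step):
    """Build the ascending set-point list, then append the descending
    retrace (skipping the duplicated maximum) for repeatability checks."""
    up = []
    t = t_min
    while t <= t_max + 1e-9:        # small tolerance against float drift
        up.append(round(t, 2))
        t += step
    return up + up[-2::-1]          # reverse pass, maximum not repeated

pts = sweep_set_points(274.15, 276.15, 1.0)
# [274.15, 275.15, 276.15, 275.15, 274.15]
```

Every interior temperature appears once on the way up and once on the way down, which is exactly what makes the repeatability check possible without operator intervention.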
The second nozzle was connected to the pressure meter using a capillary tube. This design lowered the uncertainty in the vapor-pressure measurements; however, it could not completely eradicate all the possible sources of uncertainty. The design could be evaluated for accuracy of measurement by using standard fluids, such as water, alcohols, or hydrocarbons, the vapor-pressures of which are well-known (REFPROP/NIST [47]). In order to avoid losses, the portion between the glass measurement cells and the pressure meter MKS Baratron was heated using electrical (8 and 25) and water (9 and 24) heaters. The accuracy and reliability of the measured vapor-pressure data for the studied geothermal fluids, as well as the correct operation of the experimental apparatus, were verified by measuring the vapor-pressures of well-studied standard fluids, such as pure water, methanol, ethanol, and acetone, for which reliable reference data are available (REFPROP, NIST [47]). Approximately half of the measurement cell was filled with the sample to be studied. A special flask connected to a metal tip–7, 20, and 38–was used to fill the cell. After the measurement cells were filled with the sample to be studied, part of the sample evaporated to form the vapor phase, creating a saturated vapor pressure above the liquid phase. After some time, when the system in the measurement cell had approached the equilibrium condition, the vapor-pressure measurement was begun. Since geothermal fluids consist of complex mixtures of water, salts dissolved in the liquid phase, and dissolved gases, it is crucial to consider the effects of salts and dissolved gases on the measured vapor-pressure of natural geothermal fluids. Therefore, the experiments conducted with the geothermal fluids in the present study actually measured the vapor-pressure of the aqueous salt solutions with a vapor phase consisting of a mixture of water vapor and soluble gases.
Therefore, the saturated vapor pressure of the geothermal fluids was the sum of the saturated vapor-pressures of water and the soluble gases. The soluble gases exerted a strong influence on the measured vapor-pressure of geothermal fluids.

Figure 1 Schematic diagram of the experimental apparatus for vapor-pressure measurement at low temperatures (from 274.15 to 323.15 K). 1, 28, and 30-magnetic stirrers; 2 and 36-magnets; 3-measuring cell for differential method of vapor-pressure measurements for pure water; 4-measuring cell for differential method of vapor-pressure measurements for the sample; 5 and 37-valves for closing of the measuring cell of the differential and static-26 methods for the vapor-pressure measurement of the sample; 6, 35, 39, 40-PRT with a four-channel input module for receiving and accumulating the temperature data (Omega PT–104A–19); 7, 20, and 38-connections for sample filling; 8-electrical heating of the cell connections with the pressure meter MKS Baratron 616A and 25-with MKS Baratron 615A; 9-water heating of the cell connections with the pressure gauge MKS Baratron 616A and 24-with MKS Baratron 615A; 10-pressure gauge MKS Baratron 616A for the differential method and 23-MKS Baratron 615A for the static method; 11-reservoir for maintaining a constant temperature for the pressure meter in the differential and 22-in the static method; 12-connector of the pressure signal with pressure indicator for the differential and 15-for the static method; 13-pressure signal indicator for the differential and 14-for the static method; 16-thermostat HAAKE F5; 17-electric heater control systems for the differential and 18-for the static method; 21-thermostat Lauda Gold R–415; 27-measuring cell for the static method; 29-flask with a sample under study for filling; 31-vacuum indicator TTR100; 32-liquid nitrogen trap; 33-vacuum pump; 34-PC for control.

The above-described experimental apparatus (glass cells) cannot be used for the measurements of vapor-pressure at high temperatures (above atmospheric pressure).
Therefore, a novel apparatus design was developed for the vapor-pressure measurements at temperatures ranging from 323.15 to 413.15 K (Figure 2). This newly designed experimental apparatus consisted of metallic cells prepared from stainless steel V4A (Germany). The inner volume of each cell was approximately 140 cm3, which included the volumes of the cell and the connecting tubes as well as half the volume of the valve–10 (without the volume of PRT–2 and 3 inside the cell). The measurement cell was located inside the reservoir–12, which was filled with silicone oil (KORASILON oil M50, Kurt Obermeier GmbH & Co. KG, Germany). The desired temperature in the measurement cell was achieved using a thermostat–1 (LAUDA ECO RE 415 G, Germany) with an uncertainty of 0.01 K. The temperature inside the cell was measured using two platinum resistance thermometers–2 and 3 (PRT–100, 1/10 DIN Class B, Temperatur Messelemente Hettstedt GmbH, Germany). PRT–100 was connected to a four-channel input module for receiving and accumulating the temperature data–5 (Omega PT–104A, Omega Engineering, Inc., USA). One of the two thermometers was connected to the thermostat for direct transmission of information. Using this thermometer, the thermostat created the measured temperature directly inside the measurement cell and not in the thermostat itself. This was crucial, as this method allowed creating any desired temperature with high accuracy directly inside the measurement area. The second thermometer (PRT–100) transmitted the measured temperature to the computer. The vapor-pressure was measured using the pressure meter–4 (SERIE 35 × HTC, Omega GmbH & Co., Germany) with an uncertainty of 2,000 Pa. After each predetermined time interval (usually 1 min), the vapor-pressure of a sample was measured and transmitted to the computer. Stationary operation with a constant temperature was achieved in approximately 50–70 min.
This method and the described experimental apparatus have already been used successfully for the one-phase PVT and two-phase vapor-pressure measurements of a natural water sample from the Azerbaijan geothermal field in a previous study conducted by our research group [26].

Figure 2 Schematic diagram of the experimental apparatus for vapor-pressure measurement at elevated temperatures (from 323.15 to 413.15 K). 1-thermostat LAUDA ECO RE 415 G; 2 and 3-PRT; 4-pressure measurement unit (pressure transducer) SERIE 35 × HTC; 5-four-channel input module for receiving and accumulating temperature data Omega PT–104A; 6-PC control; 7-pressure indicator; 8-flask with a sample under study for filling; 9 and 10-valves; 11-measuring cell insulation; 12-reservoir for maintaining a constant temperature in the measuring cell; 13-measuring cell; 14-magnet; 15-magnetic stirrer; 16-vacuum indicator TTR100; 17-liquid nitrogen trap; 18-vacuum pump.

In the present study, the vapor-pressure values of the three geothermal fluid samples collected from the Baden-Baden geothermal field (Kirchenstollen, Friedrichstollen, and Murquelle, Germany) were determined as a function of temperature over the range of 274–413 K. The measured vapor-pressures for the geothermal fluid samples are listed in Table 4 and depicted as a function of temperature and ion species concentration in Figure 3 and Figure 4, respectively. Figure 3 also includes the vapor-pressures for pure water calculated using the IAPWS formulation (Wagner and Pruß [48]). As observable from Figure 3, the vapor-pressure data for the geothermal fluid samples exhibited temperature behavior (P-T) similar to that of pure water (Wagner and Pruß [48]). However, the measured vapor-pressure values for the geothermal fluids were higher by 3%–25% compared to those for pure water, depending on the temperature of measurement.
The differences (percentage deviations) between the measured vapor-pressure values for the geothermal fluids obtained in the present study and the pure water values published previously (Wagner and Pruß [48]), as a function of temperature, are presented in Figure 5. As visible in the figure, the deviations varied with temperature within the range of 5.5%–25.4% for Kirchenstollen, 3.0%–11.4% for Friedrichstollen, and 5.3%–14.8% for Murquelle, all of which were considerably higher than the corresponding experimental uncertainties of 0.02% to 0.08%. The maximum deviations (in the range of 11%–25%) were observed at low temperatures (approximately 277 K), while at high temperatures, the deviations were within the range of 3.0%–5.5%. As may be noted from Figure 3, the measured values of vapor-pressure were higher than those in the reference data for pure water, although, for most aqueous salt solutions, the vapor-pressure is lower than that of pure water (see Figure 3 for the H2O+NaCl solution [49,50,51]). This could be attributed to the effect of soluble gases present in the geothermal fluids. It is well-known that soluble gases strongly affect the measured vapor-pressure of the geothermal fluids [20,21] (see below). Large differences in the range of 2.6%–18.7% were observed between the vapor-pressure data for the Kirchenstollen and the Friedrichstollen samples, while the data for the Kirchenstollen sample deviated from those for the Murquelle sample by 0.1%–14%. A relatively small difference, in the range of 2.5%–3.9%, was observed between the Friedrichstollen and the Murquelle geothermal fluid samples. Vapor-pressure is a property that exhibits relatively higher sensitivity to salt and gas concentrations compared to the other thermodynamic properties such as density, heat capacity, and speed of sound (see, for example, Abdulagatov et al. [24,25,26]).
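The percentage deviations shown in Figure 5 are computed as 100·(P_meas − P_w)/P_w against the IAPWS pure-water values. As a rough stand-in for the IAPWS formulation, the sketch below uses the Antoine equation for water (valid roughly 1–100 °C); the measured point is illustrative, not a value from Table 4:

```python
def p_sat_water_kpa(t_celsius):
    """Antoine equation for water (mmHg/Celsius constants, ~1-100 C);
    used here only as a rough stand-in for the IAPWS formulation [48]."""
    p_mmhg = 10 ** (8.07131 - 1730.63 / (233.426 + t_celsius))
    return p_mmhg * 0.133322  # mmHg -> kPa

def pct_deviation(p_measured, p_reference):
    return 100.0 * (p_measured - p_reference) / p_reference

p_ref = p_sat_water_kpa(100.0)            # ~101.3 kPa, the normal boiling point
dev = pct_deviation(p_ref * 1.03, p_ref)  # an illustrative brine point 3 % above water
```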
Therefore, the vapor-pressure data measured at standard conditions may be utilized for the determination of the salt content and its variations in a geothermal fluid, i.e., accurate measurements of the vapor-pressure–salinity relationship may be applied in the estimation of salt concentration. Measurement of thermophysical properties has frequently been used to evaluate the quality of products; in particular, properties such as density, vapor-pressure, and viscosity are highly sensitive to the composition of a product. The quality of natural water may be determined through its physical, chemical, and microbiological properties. It is important to establish the quality of the natural water sources that are to be used for different purposes, in terms of the specific water-quality parameters that would most affect the possible use of the water. Vapor pressure is a crucial and highly sensitive property of liquids (in particular, natural water) for their quality analyses (for example, composition changes), i.e., it is the most sensitive indicator of any changes in the quality of natural water. The difference between the vapor-pressure of distilled pure water and that of natural water is small, with a magnitude depending on composition (see, for example, [27]). The more accurate the vapor-pressure measurement, the more accurate is the determination of salinity (by means of the vapor-pressure–salinity relationship).

Table 4 Measured values of temperature (T/K) and vapor-pressure (P/kPa) of the geothermal fluids from the Baden-Baden geothermal field (Germany). Standard uncertainties u are: u(T)=0.005 K; u(P)=0.01 % at low temperatures (<323 K) and u(P)=0.04 % at high temperatures (>323 K) (level of confidence=95 %).
Figure 3 Detailed view of the temperature dependence of the measured values of vapor-pressure of geothermal fluids from the Baden-Baden geothermal field together with pure water values (solid lines) (IAPWS formulation, Wagner & Pruß [48]) in distinct temperature ranges. ○-Kirchenstollen; ●-Friedrichstollen; ´-Murquelle; ♦-H2O+NaCl [49]; r-H2O+NaCl [50]; □-H2O+NaCl [51]. The dashed line is calculated from the correlation for the H2O+NaCl+CO2 mixture [20]; the dashed-dotted line shows the values of vapor-pressure calculated from the correlation for the H2O+N2 mixture [20].

Figure 4 Effect of various ion species on the vapor-pressure of geothermal fluids along the different isotherms. (a)-Cl-; (b)-Na+; (c)-SO4-2; (d)-total ion concentration; (e)-Ca+2; (f)-K+; □–343 K; ●–373 K; ´–383 K; r–393 K; ♦–403 K; ▲–413 K.

Figure 5 Percentage deviations between the present measured vapor-pressures for geothermal fluids and pure water (IAPWS formulation [48]). The solid line is Kirchenstollen; the dashed line is Friedrichstollen; and the dashed-dotted line is Murquelle.

The pressure from the dissolved gases escaping the liquid phase adds to the total vapor pressure of a solution. If the water contains a large amount of dissolved gases, such as CO2, then the vapor pressure of the solution is greater than that of pure water. The individual vapor pressures of all ingredients of the solution contribute to the total vapor pressure of the solution. Vapor pressure is a property of a liquid and is dependent on two factors: one is temperature, and the other is the presence of solutes or other liquids that interact significantly with the liquid. In the case of geothermal fluids, the liquid phase is formed by water and a group of non-volatile solutes and certain dissolved gases such as CO2 or others (see above), which interact strongly with the water molecules.
In regard to a solution of one liquid and several non-volatile solutes (salts, for example), Raoult's law states that the vapor pressure of the solution is always lower than that of the pure solvent. Therefore, an aqueous salt solution has a vapor-pressure value lower than that of pure water. Figure 3 also presents the experimental vapor-pressure values for the ternary mixture H2O+CO2+NaCl reported by Anderson et al. [20] (see also [21]). It may be noted that the vapor-pressure of the ternary aqueous system H2O+CO2+NaCl containing dissolved CO2 was considerably higher than that of pure water and of the aqueous H2O+NaCl solution. In general, for vapor-pressure measurement, a geothermal fluid may be modeled as a few basic primary aqueous salt solutions (depending on the basic components of the geothermal brine) plus certain dissolved gases, using appropriate mixing rules. It is apparent that the difference in the vapor-pressure of the various geothermal fluid samples was the result of the differences in their composition, i.e., it was the effect of the concentrations of the constituent salts and dissolved gases. However, in certain cases, even geothermal fluids with the same values of total mineralization (salinity) exhibited substantial differences in their vapor-pressure values and other thermodynamic properties. This implies that the type of constituent ions also affects these properties to a considerable extent, i.e., the thermophysical characteristics of geothermal fluids depend not only on the total salt content (total mineralization) but also on the chemical nature of the constituent ion species. For instance, the total mineralization values for the samples collected from Kirchenstollen and Murquelle were almost equal: 2.74 g/L and 2.75 g/L, respectively; however, their vapor-pressure values differed by 0.1% to 14%. This was probably caused by the differences in their sulfur, silicon, calcium, and/or dissolved-gas contents.
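As a back-of-the-envelope check of the Raoult's-law effect discussed above, the sketch below estimates the vapor-pressure lowering for a solution whose salt load matches the roughly 2.75 g/L total mineralization quoted for the Murquelle sample, treating the solute as if it were entirely NaCl. This is an illustrative simplification, not the actual composition of the sample:

```python
# Raoult's law sketch: P_solution = x_water * P_pure for a non-volatile solute.
# Illustrative simplification: treat the ~2.75 g/L total mineralization as if
# it were all NaCl (the real samples contain several salts and dissolved gases).
M_NACL = 58.44   # molar mass of NaCl, g/mol
M_H2O = 18.015   # molar mass of water, g/mol
grams_salt_per_litre = 2.75

mol_salt = grams_salt_per_litre / M_NACL   # mol of NaCl per litre
mol_ions = 2 * mol_salt                    # NaCl dissociates into Na+ and Cl-
mol_water = 1000.0 / M_H2O                 # approx. mol of water per litre
x_water = mol_water / (mol_water + mol_ions)

p_pure_kpa = 101.325                       # pure water near 373.15 K
p_solution = x_water * p_pure_kpa
print(1 - x_water)    # fractional vapor-pressure lowering (a fraction of a percent)
print(p_solution)
```

The computed lowering is only about 0.17%, far smaller than the 3%–25% elevations reported for the measured fluids, consistent with the attribution of the elevation to dissolved gases rather than salts.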
The mechanism through which the chemical nature of the ion species and soluble gases in geothermal fluids affects their total measured properties remains unclear. Figure 4 illustrates the effects of various ion species on the vapor pressure of the geothermal fluid samples along various isotherms. As may be noted in the figure, different types of ion species affected the vapor pressure differently. For instance, Cl- and Na+ ions exerted opposite effects on the vapor pressure, i.e., Cl- increased the vapor pressure, while Na+ decreased it. In general, as inferred from Figure 4, anions increased, while cations decreased, the vapor pressure. Certainly, the rate, $\left ( \partial P_{S}/\partial c_{i} \right )_{T,C_{j\neq i}}$, of these increases or decreases caused by the ion species was different and was strongly dependent on the chemical nature of the ion species. For instance, the same concentrations of Cl- and Na+ ions exerted different effects on the measured vapor-pressure; at a temperature of 413 K, the value of $\left ( \partial P_{S}/\partial c_{i} \right )_{T,C_{j\neq i}}$ for Cl- ions was 57.2 kPa/(g/L), while that for the Na+ ions was −350.02 kPa/(g/L). Overall, the resultant effect of the various types of ion species depended strongly on the multi-ionic interactions between the different types of ion species and the water molecules. The experimentally determined vapor-pressure value was the result of the competition between the opposite effects of the anion and cation contributions. As stated earlier, from the microscopic point of view, the effect of individual ion contributions to the vapor-pressure of a geothermal fluid depends on their structure (shape, size, ion environment, polarization orientation, ion mobility, etc.).
Owing to the complexity of the multi-ionic interactions among the various types of ion species in geothermal fluids, it is impossible to theoretically predict the temperature and concentration dependence of the vapor-pressure and the other properties. Therefore, this evaluation is empirical and based solely on the measured data. In the present study, the vapor-pressure data for the geothermal fluid samples were fitted to the following correlation equation: $$P_{S}(T,c_{i})=P_{S,H_{2}O}(T)\left [ 1+\sum_{i=1}^{n}a_{i}c_{i} \right ]$$ where $P_{S,H_{2}O}(T)$ is the vapor-pressure of pure water (IAPWS formulation, Wagner and Pruß [48]) at temperature T; n is the number of types of ion species; ai is Riedel's characteristic constant for each ion species; and ci is the concentration (g/L) of the i-th ion species. In the present study, seven main components (ion species) were selected for the geothermal fluid samples: Cl-, Na+, Ca+, K+, SO4-2, S+, and Si+. All the measured data for the three geothermal fluids were fitted to Eq. (2). The derived values of the fitting parameter ai (Riedel's ion constants) are listed in Table 5. Riedel's model, Eq. (2), reproduces the measured values of vapor pressure for all the geothermal fluid samples within 3.8%. The values of the ion characteristic constant ai define the contribution of each ion species to the total experimentally observed values of vapor-pressure. This model may be used to predict the vapor-pressure of any geothermal fluid sample whose main components are Cl-, Na+, Ca+, K+, SO4-2, S+, and Si+, using only pure-water data and the concentrations of the ion species. Riedel [38] used the same correlation model for the thermal conductivity of multi-component aqueous salt solutions, achieving good prediction agreement (within 5%) with the experimental data for several aqueous salt solutions [39,40,41,42,52,53,54,55,56,57,58,59,60,61].
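A minimal sketch of how the Riedel-type correlation Eq. (2) could be evaluated in code. The ion constants below are illustrative placeholders, not the fitted values of Table 5, and the pure-water vapor pressure is approximated by the classical Antoine constants for water rather than the IAPWS formulation used in the paper:

```python
import math

# Sketch of the Riedel-type correlation Eq. (2):
#   P_S(T, c) = P_pure(T) * (1 + sum_i a_i * c_i)
# The ion constants below are PLACEHOLDERS for illustration; the fitted
# values belong in Table 5. Pure-water P_S uses the classical Antoine
# constants for water (valid roughly 274-373 K), not the IAPWS-95 formulation.

A, B, C = 8.07131, 1730.63, 233.426  # Antoine constants, water: P in mmHg, T in deg C

def p_pure_kpa(T_kelvin):
    """Pure-water saturation pressure from the Antoine equation, in kPa."""
    t_c = T_kelvin - 273.15
    p_mmhg = 10.0 ** (A - B / (C + t_c))
    return p_mmhg * 0.133322  # mmHg -> kPa

# Hypothetical ion constants a_i in L/g (placeholders, NOT Table 5 values)
a = {"Cl-": 0.01, "Na+": -0.006, "SO4-2": 0.004}

def p_geothermal_kpa(T_kelvin, conc_g_per_l):
    """Eq. (2): pure-water pressure times the linear ion-concentration correction."""
    correction = 1.0 + sum(a[ion] * c for ion, c in conc_g_per_l.items())
    return p_pure_kpa(T_kelvin) * correction

# Example: a dilute brine at 343.15 K
print(p_geothermal_kpa(343.15, {"Cl-": 1.2, "Na+": 0.8, "SO4-2": 0.3}))
```

Note how the structure of Eq. (2) makes the competing anion/cation contributions explicit: positive constants raise the pressure above the pure-water value, negative constants lower it.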
Several authors have examined the accuracy and predictive capability of Riedel's model (see also the review by Horvath [39]). In previous publications by our research group [24,25], Riedel's model, Eq. (2), was successfully applied to the density, speed-of-sound, and viscosity correlation for natural geothermal fluids. Table 5 Values of fitting coefficients (ion species characteristic constants) ai / (L/g) for the Riedel vapor-pressure correlation model, Eq. (2), for the main ion species. 3.1 Antoine and Wagner-Type Equations for the Vapor-Pressure of Geothermal Fluids The measured vapor-pressure values of the geothermal fluid samples were used to develop three-constant Antoine and multi-parametric Wagner-type [62] correlation equations for practical applications. $$\ln P_{S}=A-\frac{B}{T-C}$$ $$\ln\left [ \frac{P_{S}}{P_{ref}} \right ]=\frac{T_{ref}}{T}(A\tau +B\tau^{1.5}+C\tau^{3}+D\tau^{3.5}+E\tau^{4}+F\tau^{7.5})$$ where A, B, C, D, E, and F are the fitting parameters; Tref and Pref are adjustable reference parameters; and τ = 1-T/Tref is the reduced temperature difference. In the case of pure liquids (for example, water), Tref and Pref represent the critical temperature and pressure, respectively. Since the critical parameters for geothermal fluids were unknown, Tref and Pref were treated as adjustable parameters (or pseudo-critical parameters). The optimal values of the derived parameters in Eqs. (3) and (4) are presented in Table 6 and Table 7. The Wagner-type correlation model represented by Eq. (4) has been successfully used previously by several authors to represent vapor-pressure data for a series of pure fluids (see, for example, [63]). This equation has also been applied to pure water [62].
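For illustration, Eq. (4) with the exponents 1, 1.5, 3, 3.5, 4, 7.5 reduces, for pure water with Tref = Tc and Pref = Pc, to the well-known Wagner–Pruß saturation-pressure equation. A sketch using the published pure-water coefficients; the fitted geothermal-fluid parameters of Table 7 would replace them:

```python
import math

# Wagner-type Eq. (4) evaluated with the pure-water coefficients of
# Wagner & Pruss (tau exponents 1, 1.5, 3, 3.5, 4, 7.5). The fitted
# geothermal-fluid parameters and pseudo-critical Tref/Pref of Table 7
# would replace these pure-water values.
T_REF = 647.096      # K   (critical temperature of water)
P_REF = 22064.0      # kPa (critical pressure of water)
COEF = [-7.85951783, 1.84408259, -11.7866497,
        22.6807411, -15.9618719, 1.80122502]
EXPO = [1.0, 1.5, 3.0, 3.5, 4.0, 7.5]

def p_sat_kpa(T):
    """Eq. (4): ln(P/Pref) = (Tref/T) * sum(coef_k * tau**expo_k)."""
    tau = 1.0 - T / T_REF
    s = sum(c * tau ** e for c, e in zip(COEF, EXPO))
    return P_REF * math.exp((T_REF / T) * s)

print(p_sat_kpa(373.15))  # close to 101.3 kPa at the normal boiling point
```

By construction the correlation returns Pref exactly at T = Tref, which is why, for the geothermal fits, Tref and Pref play the role of pseudo-critical parameters.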
The derived values of the pseudo-critical parameters for the geothermal fluids in the present study were as follows: Kirchenstollen, Tref = 647.2 K and Pref = 20,000 kPa; Friedrichstollen, Tref = 646.4 K and Pref = 20,500 kPa; and Murquelle, Tref = 646.4 K and Pref = 20,500 kPa. As may be noted, the values of Tref and Pref for the studied geothermal fluid samples are close to those for pure water reported in previous studies [62]. Table 6 and Table 7 also present the deviation statistics between the measured and calculated values of vapor-pressure for the geothermal fluid samples. The average absolute deviations (AAD) for all the studied samples were within the range of 0.01%–0.03%, consistent with the corresponding experimental uncertainty. Table 6 Values of fitting coefficients for the Antoine-type vapor-pressure correlation, Eq. (3). Table 7 Values of fitting coefficients for the Wagner-type vapor-pressure correlation, Eq. (4). The derived vapor-pressure correlations (3) and (4) were utilized for the calculation of other crucial thermodynamic properties of the studied geothermal fluids, such as the enthalpy ∆Hvap and entropy ∆Svap of vaporization and the isobaric heat capacity ∆Cp, using the following thermodynamic relations: $$\Delta H_{vap}=T\Delta V_{S}\frac{dP_{S}}{dT}$$ $$\Delta S_{vap}=\Delta V_{S}\frac{dP_{S}}{dT}$$ where ∆Hvap and ∆Svap are the enthalpy and entropy of vaporization, respectively; PS and ∆VS = V''-V' are the vapor pressure and the specific volume change upon vaporization (the difference between the vapor and liquid specific volumes at saturation), respectively; and $\frac{dP_{S}}{dT}$ is the thermal-pressure coefficient along the saturation curve. Since the volume of the vapor is much larger than the volume of the liquid ($V''\gg V'$, $\Delta V_{S}\approx V''=RT/P$), Eqs.
(5) and (6) become: $$\Delta H_{vap}=\frac{RT^{2}}{P}\frac{dP_{S}}{dT}$$ and $$\Delta S_{vap}=\frac{RT}{P}\frac{dP_{S}}{dT}$$ The isobaric heat capacity may be calculated using the following equation: $$\Delta C_{P}=2RT \frac{d\ln P_{S}}{dT}+RT^{2} \frac{d^{2}\ln P_{S}}{dT^{2}}$$ where the vapor-pressure temperature derivative $\frac{dP_{S}}{dT}$ was calculated using Eqs. (3) and (4). The derived values of the thermodynamic functions for all the studied geothermal samples are presented in Table 8 and in Figure 6 and Figure 7. Table 8 Derived thermodynamic properties (∆Hvap / kJ∙mol-1, ∆Svap / kJ∙K-1∙mol-1, and ∆Cp) of the geothermal fluids as a function of temperature (T / K). Figure 6 Derived values of enthalpy (left) and entropy (right) of vaporization for the geothermal fluids as a function of temperature together with pure-water values calculated from the IAPWS formulation [48]. 1-Kirchenstollen; 2-Friedrichstollen; and 3-Murquelle; the dashed line is pure water [48]. Figure 7 Derived values of heat capacity of the geothermal fluids as a function of temperature. 1-Kirchenstollen; 2,3-Friedrichstollen and Murquelle. In the present study, the vapor-pressure of three natural geothermal fluid samples collected from the Baden-Baden geothermal field (Kirchenstollen, Friedrichstollen, and Murquelle, Germany) was measured over the temperature range of 274–413 K using two different methods and vapor-pressure apparatus. The study revealed that the measured vapor-pressures of the geothermal fluids were strongly dependent on the total salt content (salinity), as well as on the nature of the chemical composition (ion species) and dissolved gases. The measured vapor-pressure values for the geothermal fluids were higher than those of pure water (IAPWS standard data) by 5.5%–25.4% for the Kirchenstollen sample, 3.0%–11.4% for the Friedrichstollen sample, and 5.3%–14.8% for the Murquelle sample, depending on the measurement temperature.
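The thermodynamic relations above can be evaluated numerically. A sketch applying the ideal-gas approximation $\Delta H_{vap}\approx (RT^{2}/P)\,dP_{S}/dT$ with a finite-difference derivative of a pure-water Antoine fit; this is illustrative only, as the paper uses its fitted Eqs. (3) and (4):

```python
import math

R = 8.314462618  # universal gas constant, J/(mol K)
# Antoine constants for pure water (P in mmHg, T in deg C); illustrative stand-in
# for the paper's fitted correlations (3)-(4).
A, B, C = 8.07131, 1730.63, 233.426

def p_sat_kpa(T):
    """Pure-water saturation pressure, kPa, from the Antoine equation."""
    return 0.133322 * 10.0 ** (A - B / (C + T - 273.15))

def dh_vap_j_per_mol(T, h=0.01):
    """Enthalpy of vaporization via dH = (R T^2 / P) dP_S/dT (central difference)."""
    dpdt = (p_sat_kpa(T + h) - p_sat_kpa(T - h)) / (2.0 * h)
    return R * T * T * dpdt / p_sat_kpa(T)

dh = dh_vap_j_per_mol(373.15)
print(dh / 1000.0)  # roughly 41 kJ/mol near the normal boiling point
# (tabulated value for water: about 40.7 kJ/mol; the Antoine fit and the
# ideal-gas assumption account for the small difference)
```

The entropy of vaporization then follows from the same derivative as ΔS = ΔH/T, and the heat capacity from the second derivative of ln P_S, mirroring the relations in the text.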
The maximum deviations (in the range of 11%–25%) were observed at low temperatures (approximately 277 K), while at high temperatures, the deviations were within the range of 3.0%–5.5%. This could be attributed to the effect of soluble gases in the geothermal fluids, which exerted a strong effect on the measured vapor-pressure. Large differences, in the range of 2.6%–18.7%, were obtained between the vapor-pressure data for the Kirchenstollen and Friedrichstollen samples, while the data for Kirchenstollen deviated from those of Murquelle by 0.1%–14%. A relatively small difference, ranging from 2.5% to 3.9%, was obtained between the Friedrichstollen and Murquelle geothermal fluid samples. It was determined experimentally that different types of ion species exerted different effects on the vapor pressure. For instance, Cl- (together with SO4-2) and Na+ ions exerted opposite effects on the vapor pressure, i.e., at constant temperature, Cl- increased the vapor-pressure, while Na+ decreased it. In general, anions increased, while cations decreased, the vapor-pressure. The rate, $\left ( \partial P_{S}/\partial c_{i} \right )_{T,C_{j\neq i}}$, of these increases or decreases was different and depended strongly on the chemical nature of the ion species. For instance, the same concentrations of Cl- and Na+ ions exerted different effects on the measured vapor-pressure; at a temperature of 413 K, the value of $\left ( \partial P_{S}/\partial c_{i} \right )_{T,C_{j\neq i}}$ for Cl- was 57.2 kPa/(g/L), while that for Na+ ions was -350.02 kPa/(g/L). A Riedel-type model for the prediction of vapor-pressures at various concentrations of ion species and temperatures from 274 to 413 K was developed using the measurement data obtained in the experiments. Riedel's characteristic constant of the ions was estimated for each ion species.
The contributions of the basic ion species in the geothermal fluids (Na+, Ca+, S+, Si+, K+, SO4-2, and Cl-) to the total experimentally observed values of vapor-pressure were also estimated. The measured vapor-pressure data were utilized to develop Antoine and Wagner-type correlation models. Using the measured vapor-pressure data, values of the key derived thermodynamic properties of the geothermal fluid samples (the enthalpy and entropy of vaporization and the heat capacity) were calculated as a function of temperature. The developed models reproduced the measured values of vapor-pressure for the geothermal fluids with AAD = 0.01%–0.03%, St. Dev = 0.01%–0.09%, and Max. Dev = 0.04%–0.37%. Ilmutdin Abdulagatov coordinated and developed the research phases and the manuscript. Misirkhan Talybov performed the measurements of the vapor-pressure for the geothermal fluid samples. Lala Azizova interpreted and evaluated the experimental data and developed the vapor-pressure correlation models. Paschen H, Oertel D, Grünwald R. Möglichkeiten geothermischer Stromerzeugung in Deutschland. Berlin: Büro für Technikfolgen-Abschätzung beim Deutschen Bundestag (TAB); 2003. Kaltschmitt M, Streicher W, Wiese A. Erneuerbare Energien, 4. aktualisierte und ergänzte Auflage. Berlin: Springer Verlag; 2006. Haas JL. Physical properties of the coexisting phases and thermochemical properties of the H2O component in boiling NaCl solutions. Washington: Superintendent of Documents, U.S. Government Printing Office; 1976. Sanner B. Baden-Baden, a famous thermal spa with a long history. GHC Bull. 2000; 9: 16-22. Vetter C. Geothermal Simulation - GESI, Programmbeschreibung und Validierung. Karlsruhe: Institut für Kern- und Energietechnik, Karlsruher Institut für Technologie; 2014. Vetter C, Wiemer HJ, Kuhn D.
Comparison of sub- and supercritical Organic Rankine Cycles for power generation from low-temperature/low-enthalpy geothermal wells, considering specific net power output and efficiency. Appl Therm Eng. 2013; 51: 871-879. [CrossRef] Schröder E, Neumaier K, Nagel F, Vetter C. Study on heat transfer in heat exchangers for a new supercritical organic Rankine cycle. Heat Transfer Eng. 2014; 35: 1505-1519. [CrossRef] Stefánsson A, Driesner T, Bénézeth P. Thermodynamics of geothermal fluids. Mineralogical Society of America & Geochemical Society; 2012. 76 p. Reindl J, Shen H, Bisiar T. Reservoir engineering: An introduction and application to Rico, Colorado. Geothermal Energy-MNGN598; 2009. Schröder E, Thomauske K, Schmalzbauer J, Herberger S, Gebert C, Velevska M. Design and test of a new calorimeter for online detection of geothermal water heat capacity. Geothermics. 2015; 53: 202-212. [CrossRef] Herfurth S, Schröder E, Thomauske K, Kuhn D. Messung physikalischer Thermalwassereigenschaften unter In-situ-Bedingungen. Essen: Deutscher Geothermiekongress; 2015. Schröder E, Thomauske K, Schmalzbauer J, Herberger S. Measuring techniques for in situ measurements of thermodynamic properties of geothermal water. Melbourne: Proc. World Geothermal Congress 2015; 2015. Birner J. Hydrogeological model of the Malm aquifer in the South German Molasse Basin. Berlin: Freie Universität Berlin; 2013. Stober I. Auswirkungen der physikalischen Eigenschaften von Tiefenwässern auf die geothermische Leistung von Geothermieanlagen und die Aquiferparameter. Z Geol Wiss. 2013; 41: 9-20. Walsh SDC, Nagasree G, Allan MML, Martin OS. Calculating thermophysical fluid properties during geothermal energy production with NESS and Reaktoro. Geothermics. 2017; 70: 146-154. [CrossRef] Bourcier WL, Lin M, Nix G. Recovery of minerals and metals from geothermal fluids, UCRL-CONF-215135. Cincinnati, OH, United States: 2003 SME Annual Meeting; 2003. Francke H, Thorade M.
Density and viscosity of brine: An overview from a process engineer's perspective. Chem Erde Geochem. 2010; 70: 23-32. Rogers PSZ, Pitzer KS. Volumetric properties of aqueous sodium chloride solutions. J Phys Chem Ref Data. 1982; 11: 15-81. [CrossRef] Zezin D, Driesner T, Sanchez-Valle C. Volumetric properties of mixed electrolyte aqueous solutions at elevated temperatures and pressures. The systems CaCl2-NaCl-H2O and MgCl2-NaCl-H2O to 523.15 K, 70 MPa, and ionic strength from (0.1 to 18) mol∙kg-1. J Chem Eng Data. 2014; 60: 1181-1192. [CrossRef] Anderson G, Probst A, Murray L, Butler S. An accurate PVT model for geothermal fluids as represented by H2O+NaCl+CO2 mixtures. Stanford: Proc. 17th Workshop on Geothermal Reservoir Engineering; 1992. McKibbin R, McNabb A. Mathematical modeling of the phase boundaries and fluid properties of the system H2O+NaCl+CO2. Auckland: Proc. 17th New Zealand Geothermal Workshop; 1995. Millero FJ, Huang F. The density of seawater as a function of salinity (5 to 70 g kg-1) and temperature (273.15 to 363.15 K). Ocean Sci. 2009; 5: 91-100. [CrossRef] Ostermann RD, Paranjpe SG, Godbole SP, Kamath VA. The effect of dissolved gas on geothermal brine viscosity. California: Proc 56th Ann Soc Petrol Eng California Regional Meeting; 1986. Abdulagatov IM, Akhmedova-Azizova LA, Aliev RM, Badavov GB. Measurements of the density, speed of sound, viscosity and derived thermodynamic properties of geothermal fluids. J Chem Eng Data. 2016; 61: 234-246. [CrossRef] Abdulagatov IM, Akhmedova-Azizova LA, Aliev RM, Badavov GB. Measurements of the density, speed of sound, viscosity and derived thermodynamic properties of geothermal fluids. Part II. Appl Geochem. 2016; 69: 28-41. [CrossRef] Talibov MA, Safarov JT, Hassel ER, Abdulagatov IM. High-pressure and high-temperature density and vapor-pressure measurements and derived thermodynamic properties of natural waters of the Yardymli district of Azerbaijan. High Temp High Pres. 2018; 47: 223-255.
Abdulagatov IM, Dvoryanchikov VI. Thermodynamic properties of geothermal fluids. Russian J Geochem. 1995; 5: 612-620. Palliser Ch, McKibbin R. A model for deep geothermal brines, II: Thermodynamic properties-density. Transport Porous Med. 1998; 33: 129-154. [CrossRef] Palliser Ch. A model for deep geothermal brines: State space description and thermodynamic properties. Auckland: Massey University; 1998. Dittman GL. Calculation of brine properties. Lawrence Livermore Laboratory, Report UCID 17406; 1977. Potter RW, Haas JL. A model for the calculation of the thermodynamic properties of geothermal fluids. Geoth Resour Council. 1977; 1: 243-244. Dolejs D, Manning CE. Thermodynamic model for mineral solubility in aqueous fluids: theory, calibration and application to model fluid-flow systems. Geofluids. 2010; 10: 20-40. Palliser Ch, McKibbin R. A model for deep geothermal brines, III: Thermodynamic properties-enthalpy and viscosity. Transport Porous Med. 1998; 33: 155-171. [CrossRef] Alkan H, Babadagli T, Satman A. The prediction of the PVT/phase behavior of the geothermal fluid mixtures. Proceedings of the World Geothermal Congress; 1995. 1659-1665 p. Champel B. Discrepancies in brine density databases at geothermal conditions. Geothermics. 2006; 35: 600-606. [CrossRef] Spycher N, Pruess K. A model for thermophysical properties of CO2-brine mixtures at elevated temperatures and pressures. Stanford: Proc. 36th Workshop on Geothermal Reservoir Engineering; 2011. Wahl EF. Geothermal energy utilization. New York: Wiley; 1977. Riedel L. The heat conductivity of aqueous solutions of strong electrolytes. Chem Ing Tech. 1951; 23: 59-64. [CrossRef] Horvath AL. Handbook of aqueous electrolyte solutions: Physical properties, estimation methods and correlation methods. Chichester: Ellis Horwood; 1985. Aseyev GG, Zaytsev ID. Estimation methods and experimental data. New York: Begell House; 1996. Aseyev GG.
Methods for calculation of the multicomponent systems and experimental data on thermal conductivity and surface tension. New York: Begell House; 1998. Abdulagatov IM, Assael M. Viscosity. In: Hydrothermal properties of materials. London: John Wiley & Sons; 2009. 249-270 p. Agemar T, Alten JA, Ganz B, Kuder J, Kühne K, Schumacher S, et al. The geothermal information system for Germany-GeotIS. Z Dtsch Ges Geowiss. 2014; 165: 129-144. [CrossRef] Suchi E, Dittmann J, Knopf S, Müller C, Schulz R. Geothermal atlas to visualize potential conflicts of interest between CO2 storage (CCS) and deep geothermal energy in Germany. Z Dtsch Ges Geowiss; 2014. Ganz B, Schellschmidt R, Schulz R, Sanner B. Geothermal energy use in Germany. Pisa, Italy: European Geothermal Congress; 2013. Helgeson HC. Solution chemistry and metamorphism. Res Geochem. 1967; 55: 379-385. Lemmon EW, Bell IH, Huber ML, McLinden MO. NIST Standard Reference Database 23, NIST Reference Fluid Thermodynamic and Transport Properties, REFPROP, version 10.0. Gaithersburg, MD: Standard Reference Data Program, National Institute of Standards and Technology; 2018. Wagner W, Pruß A. New international formulation for the thermodynamic properties of ordinary water substance for general and scientific use. J Phys Chem Ref Data. 2002; 31: 387-535. [CrossRef] Haar L, Gallagher JS, Kell GS. NBS/NRC steam tables: Thermodynamic and transport properties and computer programs for vapor and liquid states in SI units. Washington: Hemisphere; 1984. 120 p. Fabuss BM, Korosi A. Vapor pressures of binary aqueous solutions of NaCl, KCl, Na2SO4, and MgSO4 at concentrations and temperatures of interest in desalination processes. Desalination. 1966; 1: 139-148. Nie N, Zheng D, Dong L, Li Y. Thermodynamic properties of the water + 1-(2-hydroxylethyl)-3-methylimidazolium chloride system. J Chem Eng Data. 2012; 57: 3598-3603. [CrossRef] Abdulagatov IM, Magomedov UM.
Thermal conductivity of aqueous ZnCl2 solutions at high temperatures and high pressures. Ind Eng Chem Res. 1998; 37: 4883-4888. [CrossRef] Abdulagatov IM, Akhmedova-Azizova LA, Azizov ND. Thermal conductivity of binary aqueous NaBr and KBr and ternary H2O+NaBr+KBr solutions at temperatures from 294 to 577 K and pressures up to 40 MPa. J Chem Eng Data. 2004; 49: 1727-1737. [CrossRef] Abdulagatov IM, Akhmedova-Azizova LA. Thermal conductivity of aqueous CaCl2 solutions at high temperatures and high pressures. J Sol Chem. 2014; 43: 421-444. [CrossRef] Abdulagatov IM, Azizov ND, Zeinalova AB. Density, apparent and partial molar volumes, and viscosity of aqueous Na2CO3 solutions at high temperatures and high pressures. Z Phys Chem. 2007; 221: 963-1000. [CrossRef] Abdulagatov IM, Azizov ND. PVTx measurements and partial molar volumes for aqueous Li2SO4 solutions at temperatures from 297 to 573 K and pressures up to 40 MPa. Int J Thermophys. 2003; 24: 1581-1610. [CrossRef] Abdulagatov IM, Azizov ND. Thermal conductivity and viscosity of the aqueous K2SO4 solutions at temperatures from 298 to 573 K and at pressures up to 30 MPa. Int J Thermophys. 2005; 26: 593-635. [CrossRef] Abdulagatov IM, Azizov ND. Viscosity of aqueous CaCl2 solutions at high temperatures and high pressures. Fluid Phase Equilib. 2006; 240: 204-219. [CrossRef] Abdulagatov IM, Azizov ND. Densities, apparent and partial molar volumes of concentrated aqueous LiCl solutions at high temperatures and high pressures. Chem Geology. 2006; 230: 22-41. [CrossRef] Abdulagatov IM, Zeinalova AB, Azizov ND. Viscosity of aqueous Na2SO4 solutions at temperatures from 298 to 573 K and at pressures up to 40 MPa. Fluid Phase Equilib. 2005; 227: 57-70. Abdulagatov IM, Azizov ND, Zeinalova AB. Viscosities, densities, apparent and partial molar volumes of concentrated aqueous MgSO4 solutions at high temperatures and high pressures. Phys Chem Liq. 2007; 45: 127-148. [CrossRef] Wagner W, Saul A.
Proc 10th International Conference on the Properties of Steam. Moscow: Mir; 1986. 199 p. Lemmon EW, Goodwin ARH. Critical properties and vapor pressure equation for alkanes CnH2n+2: Normal alkanes with n ≤ 36 and isomers for n = 4 through n = 9. J Phys Chem Ref Data. 2000; 29: 1-39. [CrossRef]
When are modules and representations not the same thing? I've been trying for a while to get a real concrete handle on the relationship between representations and modules. To frame the question, I'll put here the standard situation I have in mind: A ring $R$ lives in the category Ab of Abelian groups as an internal monoid $(\mu_R, \eta_R)$. A module is then just an Abelian group $A$ and a map $m : R \otimes A \rightarrow A$ that commutes with the monoid structure in the way you'd expect. Alternatively, take an Abelian group $A$ and look at its group of endomorphisms $[A,A]$. This has an internal monoid $(\mu_A, \eta_A)$ given by composition and identity. Then a representation is just a monoid homomorphism $(R, \mu_R, \eta_R) \rightarrow ([A,A], \mu_A, \eta_A)$ in Ab, i.e. a ring homomorphism. But then, Ab is monoidal closed, so these are the same concept under the iso $$\hom(R\otimes A, A) \cong \hom(R, [A,A])$$ This idea seems to work for any closed category where one wants to relate a multiplication to composition. So, my question is, since these things are isomorphic in such a general context, why are they taught as two separate concepts? Is it merely pedagogical, or are there useful examples where modules and representations are distinct? rt.representation-theory modules ct.category-theory ra.rings-and-algebras Aleks Kissinger I have never seen the words "module" and "representation" clearly distinguished. My internal stylist says "module" when $R$ is acting on an abelian group, and "representation" when the action, especially by something that is not a ring but rather a group, etc., is on a vector space. Eventually you run into trouble with some definitions. For example, it's very hard to say when a map $G\to\operatorname{hom}(V,V)$ is algebraic if $G$ is an algebraic group and $V$ is $\infty$-dimensional. So there the only valid definition of "module" is a map $V\to k[G]\otimes V$.
– Theo Johnson-Freyd Jul 20 '10 at 17:44 Theo, you are correct: "rational representations" of algebraic groups are, in fact, modules. However, the situation in Lie theory is the opposite, i.e. there is a good theory of representations (with some intricate points), but while some modules have been considered, they do not entirely capture it, let alone define it. – Victor Protsak Jul 20 '10 at 19:30 These and other answers/comments are good reminders of the linguistic subtleties here. There is a related usage issue: I tend to use irreducible and completely reducible for representations, simple and semisimple for modules. But people do mix the terms, and it's hard to say what is "correct". – Jim Humphreys Jul 20 '10 at 19:55 Here is my representation theorist's perspective: the key difference between representations and modules is that representations are "non-linear", whereas modules are "linear". I'll concentrate on the case of groups as the most familiar, but this applies more generally. As Greg has already mentioned, in the most general sense, a representation is a homomorphism $f:G\to H,$ and usually there is no linear (or additive) structure on $H$, i.e. the image $f(G)$ need not be closed under sums; in fact, if $H$ is a non-abelian group, e.g. the symmetric group, the notion of sum doesn't even make sense (if $H=GL(V)$ then we may view its elements as endomorphisms of $V$ and add them, but this is unnatural since, by definition, $f$ is compatible with the multiplicative structure). By contrast, a module involves a linear action $G\times V\to V,$ which is then "completed" by allowing arbitrary linear combinations, leading to certain technical advantages. Here is an example of a construction that is very useful and makes perfect sense module-theoretically, but not representation-theoretically: change of scalars.
Given a module $M$ over a group ring $R[G]$ and a commutative ring homomorphism $R\to S,$ one gets a module $S\otimes_R M$ over the group ring $S[G]$. Common examples involve extensions of scalars (e.g. from $\mathbb{R}$ to $\mathbb{C}$, from a field $K$ of definition to the splitting field, from $\mathbb{Z}$ to $\mathbb{Z}_p$) and, more to the point, reductions (e.g. from $\mathbb{Z}$ or $\mathbb{Z}_p$ to $\mathbb{Z}/p\mathbb{Z}$). The module language is, predictably, also very useful in providing categorical descriptions of various operations on representations, such as the functors of induction and restriction, $$Ind_H^G: H\text{-mod}\to G\text{-mod}\ \text{ and }\ Res_H^G: G\text{-mod}\to H\text{-mod},$$ where $H$ is a subgroup of $G,$ or the monoidal structure on $G$-mod. Finally, here are two illustrations of the complementary nature of the two approaches beyond the group case, from linear algebra. A single linear transformation $T:V\to V$ on a finite-dimensional vector space $V$ over $K$ is most naturally viewed as a representation (no additive structure); in this case, it's a representation of the quiver with a single vertex and a single loop. From this point of view, classification up to isomorphism is a problem about conjugacy classes of linear transformations, $$T\to gTg^{-1},\ g\in GL(V).$$ By contrast, in the module-style description we associate with $T$ a module over the ring $K[x]$ of polynomials in one variable over $K$, and the classification problem reduces to the structure of modules over $K[x]$, which is a PID, with all the usual consequences. (Here the module picture is more illuminating.) If we consider a linear operator $S:V\to W$ between two different vector spaces, $$S\to hSg^{-1},\ g\in GL(V),\ h\in GL(W),$$ then classification up to isomorphism is accomplished by row and column reduction. The corresponding quiver $\circ\to\circ$ is a single arrow connecting two distinct vertices, but its path algebra is less familiar.
(Here the representation theory picture is more illuminating.) Victor Protsak To expand on Tom's answer, the word "representation" is a 19th century word that originally meant "group homomorphism". If $f:G \to H$ is a homomorphism from a group $G$ to a group $H$, then $f(g)$ "represents" the element $g$. $H$ is taken to be a "familiar" or "explicit" group, usually a matrix group but also sometimes a permutation group. The word "module" is a 20th century word (I think) that means "generalized vector space". As has been pointed out, a representation of a group $G$ is equivalent to a $k[G]$-module. These days the terms are largely interchangeable; you can also talk about a representation of an algebra instead of a group. Certainly you can add topology to the conditions, for instance by using the group $C^*$-algebra of a locally compact topological group. To the extent that there is still a useful distinction, there is a difference in emphasis. If a ring $R$ (or a group or whatever) acts on an abelian group $A$, and you consider its action to be a low-level structure, analogous to multiplying a vector by a scalar, then you should call $A$ an $R$-module. On the other hand, if you think of the action as a high-level geometric effect, analogous to a group acting on a manifold, then you should call it a representation. If you don't care, then you can use either term or both and it's all cool AFAIK :-). Possibly the word "module" is slowly supplanting the word "representation", because it's a shorter word as well as more modern and more general. Greg Kuperberg I like the explanation that the key difference in these two terms is emphasis. How far does this analogy go? Would it be fair to say, for instance, that modules of the enveloping algebra of sl(2) are a bit like vector spaces, but with some pretty beefy scalars? The primary concern here is the symmetries put on the space by the action of sl(2). I would say maybe yes.
In plain old vector spaces, the scalar action is an important symmetry, so much so that we like to talk about identities "up to a scalar", etc. $\endgroup$ – Aleks Kissinger Jul 20 '10 at 15:59 $\begingroup$ A purely historical/etymological remark, possibly wrong (maybe someone can confirm, correct, or deny this): I believe that long before anybody talked about $R$-modules (like maybe late 19th century) certain abelian groups were being called modules. Maybe what we now would call a lattice in a real vector space $V$ -- something which has an $\mathbb R$-basis of $V$ as a $\mathbb Z$-basis? Perhaps with $V=\mathbb C$? Is the term "module" in algebra distantly related to the terms "modular group" and "modular form" by this route? $\endgroup$ – Tom Goodwillie Jul 20 '10 at 16:24 $\begingroup$ @Aleks: Sure, that's a valid viewpoint. The sort of thing that blurs the distinction between beefy scalars and puny scalars, even if you are conservative, is a module over (say) a polynomial ring. And even for a simple Lie algebra, the action of the Cartan subalgebra is very module-like, since in the integrable case it amounts to a graded vector space structure. $\endgroup$ – Greg Kuperberg Jul 20 '10 at 17:34 $\begingroup$ @Tom: I looked in early papers in JSTOR and Google Books. In English, a few authors up to the 1930s used the word "module" on an ad hoc basis to sort-of mean an abelian group, for instance E.T. Bell and Harald Bohr. Suddenly in 1938 there are algebra papers by Nakayama that read like papers written today, with left and right modules of algebras, quotient modules, etc. Nakayama had read a German algebra book. It appears that the whole package of definitions came from the Gottingen school. $\endgroup$ – Greg Kuperberg Jul 20 '10 at 17:48 $\begingroup$ Aleks: 1. In the interest of sanity, the word "scalar" should never be applied to non-commuting quantities. 2. 
The representation-module language for Lie algebras mirrors the case of groups: a representation of $sl_2$ is a module over its universal enveloping algebra $U(sl_2).$ From the category theory point of view, a better formulation is that there is an equivalence of categories between {representations of $\mathfrak{g}$} and {$U(\mathfrak{g})$-modules}. $\endgroup$ – Victor Protsak Jul 20 '10 at 17:55 It is certainly true that the category of representations of a group $G$ over a field $k$ is equivalent to the category of modules for the group ring $k[G]$, and it is often productive to rephrase questions about representations as questions about modules. Below, I give some examples of structure which is easier to discuss in terms of representations. But, as I will indicate, it is usually possible to rephrase in terms of modules with enough effort. Tensor products: If $V$ and $W$ are two representations of $G$, then $V \otimes W$ has a natural structure as a $G$-representation. For $k[G]$-modules, this is not true; the tensor product has to be added as additional structure on the category of $k[G]$-modules. Here is an explicit example: Let $G=\mathbb{Z}/4$ and let $H = \mathbb{Z}/2 \times \mathbb{Z}/2$. Then $\mathbb{C}[G]$ and $\mathbb{C}[H]$ are isomorphic rings, but the tensor structures on $\mathbb{C}[G]$-modules and $\mathbb{C}[H]$-modules are inequivalent. The same issue exists with duals. People who like rings better than groups would say that the issue is that I am talking about the algebra structure of $k[G]$ when I should be talking about the Hopf algebra structure. Topology: Suppose that $G$ is a topological group (maybe a Lie group) and $k$ a topological field (maybe $\mathbb{R}$). Then a continuous representation of $G$ is a map $G \times V \to V$ which is a group action, $k$-linear, and continuous.
I imagine there is a way to put a topology on $k[G]$ so that a continuous representation is a $k[G]$-module such that $k[G] \times V \to V$ is continuous, but I haven't seen it. And this will get worse with adjectives like smooth, algebraic, ... David E Speyer $\begingroup$ Concerning your last sentence ('this will get worse with adjectives like ... algebraic'), "algebraic" representations of an affine group scheme $G$ over a field $k$ are actually co-modules for the Hopf algebra of regular functions $k[G]$. This means that the action of $G$ on the representation space $V$ is defined by a mapping $V \to k[G] \otimes V$ satisfying some natural diagrams. When $G$ is smooth and $k$ is alg. closed, one can of course view $V$ as a module for the group algebra of the "abstract" group $G(k)$, but as you point out, it isn't clear this is useful. $\endgroup$ – George McNinch Jul 20 '10 at 15:48 $\begingroup$ TOPOLOGY: Representations of top. groups is probably a good foil for the mod/rep correspondence. For one thing Top fails to be cartesian closed, so the first crack at comparing these things by chasing the iso hom(GxV,V) \cong hom(G,[V,V]) fails unless V and G are nice spaces (compact Hausdorff or something). TENSORS: It seems the natural representation of G on V (x) W already (implicitly) uses the comultiplication from the Hopf algebra structure of k[G], i.e. the induced representation is "(psi (x) phi) o delta" ...where delta is the linear map that copies the basis elements of k[G]. $\endgroup$ – Aleks Kissinger Jul 20 '10 at 15:48 $\begingroup$ Note: positive-dimensional vector spaces over non-discrete locally compact fields are never compact. There are some genuine difficulties in coming up with the correct notion of a group algebra in the topological setting that will faithfully capture representation-theoretic picture (see Kirillov's "Elements of representation theory"). Many of them concern a good category to which it should belong.
The reduced $C^*$-algebra of an infinite discrete group is a good illustration. This cannot be fully computed even in easy cases. $\endgroup$ – Victor Protsak Jul 20 '10 at 17:30 I would teach that an $R$-module is an abelian group $A$ plus a map of sets $R\times A\to A$ satisfying certain identities. I would probably also point out that this is the same thing as an abelian group $A$ plus a ring homomorphism $R\to End(A)$. I would also point out that "vector space" is the traditional term for "module" when $R$ is a field. Similarly, I would teach that an action of a group $G$ on a set $X$ is a map of sets $G\times X\to X$ satisfying certain identities, and I would probably also point out that that is the same as a group homomorphism $G\to Aut(X)$; and if it seemed appropriate for those students I would also say that this second point of view is useful for generalizing the idea so as to make groups act on things other than sets. An action of a group $G$ on a $k$-module is the same as a module for the group ring $kG$. You can also call this a representation of $G$ over $k$. This is not traditionally called a representation of $kG$. The fact that there are overlapping definitions is just historical accident. The word "module" was in use in special cases long before there was category theory, even before there was abstract ring theory as we know it. So was the word "representation". The fact that the two terms are both still used is not because somebody decided on a good reason to keep them both; they just survived, as words do. Tom Goodwillie $\begingroup$ "The word "module" was in use in special cases long before there was category theory, even before there was abstract ring theory as we know it" - do you have a reference in support of that? Note that van der Waerden's Modern algebra used "abelian group with operators" (I personally think that "module" is a great improvement in terminology).
I am only aware of unrelated uses of the word "module" (actually, modulus, pl moduli) prior to the modern era. $\endgroup$ – Victor Protsak Jul 20 '10 at 17:39 $\begingroup$ I meant essentially what Greg says in his comment after my comment in the thread of his answer to this question. Namely, "module" was a word sometimes used for an abelian group in some contexts. I don't have any evidence. I wonder how far back it goes. I heard Serre use the expression "group with operators" in the 70s, but he wasn't talking about abelian groups. $\endgroup$ – Tom Goodwillie Jul 20 '10 at 18:30 $\begingroup$ OK, this seems consistent with what I know, but note that the gap between Nakayama 1938 and Eilenberg-MacLane 1945 is less than 10 years. An interesting bit about Serre! French algebra nomenclature diverges from the English one in quite a few places. $\endgroup$ – Victor Protsak Jul 20 '10 at 19:38 $\begingroup$ Serre was lecturing to undergraduates on the Jordan-Hölder theorem, and mentioning a generalization. $\endgroup$ – Tom Goodwillie Jul 20 '10 at 20:35
Cosmological Simulations for Combined-Probe Analyses: Covariance and Neighbour-Exclusion Bias (1805.04511) J. Harnois-Deraps, A. Amon, A. Choi, V. Demchenko, C. Heymans, A. Kannawadi, R. Nakajima, E. Sirks, L. van Waerbeke, Yan-Chuan Cai, B. Giblin, H. Hildebrandt, H. Hoekstra, L. Miller, T. Troester May 11, 2018 astro-ph.CO We present a public suite of weak lensing mock data, extending the Scinet Light Cone Simulations (SLICS) to simulate cross-correlation analyses with different cosmological probes. These mocks include KiDS-450- and LSST-like lensing data, cosmic microwave background lensing maps and simulated spectroscopic surveys that emulate the GAMA, BOSS and 2dFLenS galaxy surveys. With 817 independent realisations, our mocks are optimised for combined-probe covariance estimation, which we illustrate for the case of a joint measurement involving cosmic shear, galaxy-galaxy lensing and galaxy clustering from KiDS-450 and BOSS data. With their high spatial resolution, the SLICS are also optimal for predicting the signal for novel lensing estimators, for the validation of analysis pipelines, and for testing a range of systematic effects such as the impact of neighbour-exclusion bias on the measured tomographic cosmic shear signal. For surveys like KiDS and DES, where the rejection of neighbouring galaxies occurs within ~2 arcseconds, we show that the measured cosmic shear signal will be biased low, but by less than a percent on the angular scales that are typically used in cosmic shear analyses. The amplitude of the neighbour-exclusion bias doubles in deeper, LSST-like data. The simulation products described in this paper are made available at http://slics.roe.ac.uk/. The third data release of the Kilo-Degree Survey and associated data products (1703.02991) J. T. A. de Jong, G.
A. Verdoes Kleijn, T. Erben, H. Hildebrandt, K. Kuijken, G. Sikkema, M. Brescia, M. Bilicki, N. R. Napolitano, V. Amaro, K. G. Begeman, D. R. Boxhoorn, H. Buddelmeijer, S. Cavuoti, F. Getman, A. Grado, E. Helmich, Z. Huang, N. Irisarri, F. La Barbera, G. Longo, J. P. McFarland, R. Nakajima, M. Paolillo, E. Puddu, M. Radovich, A. Rifatto, C. Tortora, E. A. Valentijn, C. Vellucci, W-J. Vriend, A. Amon, C. Blake, A. Choi, I. Fenech Conti, R. Herbonnet, C. Heymans, H. Hoekstra, D. Klaes, J. Merten, L. Miller, P. Schneider, M. Viola May 21, 2017 astro-ph.CO, astro-ph.GA, astro-ph.IM The Kilo-Degree Survey (KiDS) is an ongoing optical wide-field imaging survey with the OmegaCAM camera at the VLT Survey Telescope. It aims to image 1500 square degrees in four filters (ugri). The core science driver is mapping the large-scale matter distribution in the Universe, using weak lensing shear and photometric redshift measurements. Further science cases include galaxy evolution, Milky Way structure, detection of high-redshift clusters, and finding rare sources such as strong lenses and quasars. Here we present the third public data release (DR3) and several associated data products, adding further area, homogenized photometric calibration, photometric redshifts and weak lensing shear measurements to the first two releases. A dedicated pipeline embedded in the Astro-WISE information system is used for the production of the main release. Modifications with respect to earlier releases are described in detail. Photometric redshifts have been derived using both Bayesian template fitting, and machine-learning techniques. For the weak lensing measurements, optimized procedures based on the THELI data reduction and lensfit shear measurement packages are used. In DR3 stacked ugri images, weight maps, masks, and source lists for 292 new survey tiles (~300 sq.deg) are made available. 
The multi-band catalogue, including homogenized photometry and photometric redshifts, covers the combined DR1, DR2 and DR3 footprint of 440 survey tiles (447 sq.deg). Limiting magnitudes are typically 24.3, 25.1, 24.9, 23.8 (5 sigma in a 2 arcsec aperture) in ugri, respectively, and the typical r-band PSF size is less than 0.7 arcsec. The photometric homogenization scheme ensures accurate colors and an absolute calibration stable to ~2% for gri and ~3% in u. Separately released are a weak lensing shear catalogue and photometric redshifts based on two different machine-learning techniques. KiDS-450: Cosmological parameter constraints from tomographic weak gravitational lensing (1606.05338) H. Hildebrandt, M. Viola, C. Heymans, S. Joudaki, K. Kuijken, C. Blake, T. Erben, B. Joachimi, D. Klaes, L. Miller, C.B. Morrison, R. Nakajima, G. Verdoes Kleijn, A. Amon, A. Choi, G. Covone, J.T.A. de Jong, A. Dvornik, I. Fenech Conti, A. Grado, J. Harnois-Déraps, R. Herbonnet, H. Hoekstra, F. Köhlinger, J. McFarland, A. Mead, J. Merten, N. Napolitano, J.A. Peacock, M. Radovich, P. Schneider, P. Simon, E.A. Valentijn, J.L. van den Busch, E. van Uitert, L. Van Waerbeke Oct. 28, 2016 astro-ph.CO We present cosmological parameter constraints from a tomographic weak gravitational lensing analysis of ~450deg$^2$ of imaging data from the Kilo Degree Survey (KiDS). For a flat $\Lambda$CDM cosmology with a prior on $H_0$ that encompasses the most recent direct measurements, we find $S_8\equiv\sigma_8\sqrt{\Omega_{\rm m}/0.3}=0.745\pm0.039$. This result is in good agreement with other low redshift probes of large scale structure, including recent cosmic shear results, along with pre-Planck cosmic microwave background constraints. A $2.3$-$\sigma$ tension in $S_8$ and `substantial discordance' in the full parameter space is found with respect to the Planck 2015 results. 
We use shear measurements for nearly 15 million galaxies, determined with a new improved `self-calibrating' version of $lens$fit validated using an extensive suite of image simulations. Four-band $ugri$ photometric redshifts are calibrated directly with deep spectroscopic surveys. The redshift calibration is confirmed using two independent techniques based on angular cross-correlations and the properties of the photometric redshift probability distributions. Our covariance matrix is determined using an analytical approach, verified numerically with large mock galaxy catalogues. We account for uncertainties in the modelling of intrinsic galaxy alignments and the impact of baryon feedback on the shape of the non-linear matter power spectrum, in addition to the small residual uncertainties in the shear and redshift calibration. The cosmology analysis was performed blind. Our high-level data products, including shear correlation functions, covariance matrices, redshift distributions, and Monte Carlo Markov Chains are available at http://kids.strw.leidenuniv.nl. RCSLenS: The Red Cluster Sequence Lensing Survey (1603.07722) H. Hildebrandt, A. Choi, C. Heymans, C. Blake, T. Erben, L. Miller, R. Nakajima, L. van Waerbeke, M. Viola, A. Buddendiek, J. Harnois-Déraps, A. Hojjati, B. Joachimi, S. Joudaki, T. D. Kitching, C. Wolf, S. Gwyn, N. Johnson, K. Kuijken, Z. Sheikhbahaee, A. Tudorica, H. K. C. Yee Aug. 29, 2016 astro-ph.CO, astro-ph.IM We present the Red-sequence Cluster Lensing Survey (RCSLenS), an application of the methods developed for the Canada France Hawaii Telescope Lensing Survey (CFHTLenS) to the ~785deg$^2$, multi-band imaging data of the Red-sequence Cluster Survey 2 (RCS2). This project represents the largest public, sub-arcsecond seeing, multi-band survey to date that is suited for weak gravitational lensing measurements. 
With a careful assessment of systematic errors in shape measurements and photometric redshifts we extend the use of this data set to allow cross-correlation analyses between weak lensing observables and other data sets. We describe the imaging data, the data reduction, masking, multi-colour photometry, photometric redshifts, shape measurements, tests for systematic errors, and a blinding scheme to allow for more objective measurements. In total we analyse 761 pointings with r-band coverage, which constitutes our lensing sample. Residual large-scale B-mode systematics prevent the use of this shear catalogue for cosmic shear science. The effective number density of lensing sources over an unmasked area of 571.7deg$^2$ and down to a magnitude limit of r~24.5 is 8.1 galaxies per arcmin$^2$ (weighted: 5.5 arcmin$^{-2}$) distributed over 14 patches on the sky. Photometric redshifts based on 4-band griz data are available for 513 pointings covering an unmasked area of 383.5 deg$^2$. We present weak lensing mass reconstructions of some example clusters as well as the full survey representing the largest areas that have been mapped in this way. All our data products are publicly available through CADC at http://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en/community/rcslens/query.html in a format very similar to the CFHTLenS data release. RCSLenS: A new estimator for large-scale galaxy-matter correlations (1512.03625) A. Buddendiek, P. Schneider, H. Hildebrandt, C. Blake, A. Choi, T. Erben, C. Heymans, L. van Waerbeke, M. Viola, J. Harnois-Deraps, L. Koens, R. Nakajima Dec. 11, 2015 astro-ph.CO We present measurements of the galaxy bias $b$ and the galaxy-matter cross-correlation coefficient $r$ for the BOSS LOWZ luminous red galaxy sample.
Using a new statistical weak lensing analysis of the Red Sequence Cluster Lensing Survey (RCSLenS) we find the bias properties of this sample to be higher than previously reported with $b=2.45^{+0.05}_{-0.05}$ and $r=1.64^{+0.17}_{-0.16}$ on scales between $3'$ and $20'$. We repeat the measurement for angular scales of $20'\leq \vartheta \leq70'$, which yields $b=2.39^{+0.07}_{-0.07}$ and $r=1.24^{+0.26}_{-0.25}$. This is the first application of a data compression analysis using a complete set of discrete estimators for galaxy-galaxy lensing and galaxy clustering. As cosmological data sets grow, our new method of data compression will become increasingly important in order to interpret joint weak lensing and galaxy clustering measurements and to estimate the data covariance. In future studies this formalism can be used as a tool to study the large-scale structure of the Universe to yield a precise determination of cosmological parameters. RCSLenS: Cosmic Distances from Weak Lensing (1512.03627) T. D. Kitching, M. Viola, H. Hildebrandt, A. Choi, T. Erben, D. G. Gilbank, C. Heymans, L. Miller, R. Nakajima, E. van Uitert In this paper we present results of applying the shear-ratio method to the RCSLenS data. The method takes the ratio of the mean of the weak lensing tangential shear signal about galaxy clusters, averaged over all clusters of the same redshift, in multiple background redshift bins. In taking a ratio the mass-dependency of the shear signal is cancelled-out leaving a statistic that is dependent on the geometric part of the lensing kernel only. We apply this method to 535 clusters and measure a cosmology-independent distance-redshift relation to redshifts z~1. In combination with Planck data the method lifts the degeneracies in the CMB measurements, resulting in cosmological parameter constraints of OmegaM=0.31 +/- 0.10 and w0 = -1.02 +/- 0.37, for a flat wCDM cosmology. 
Dark matter halo properties of GAMA galaxy groups from 100 square degrees of KiDS weak lensing data (1507.00735) M. Viola, M. Cacciato, M. Brouwer, K. Kuijken, H. Hoekstra, P. Norberg, A.S.G. Robotham, E. van Uitert, M. Alpaslan, I.K. Baldry, A. Choi, J.T.A. de Jong, S.P. Driver, T. Erben, A. Grado, Alister W. Graham, C. Heymans, H. Hildebrandt, A.M. Hopkins, N. Irisarri, B. Joachimi, J. Loveday, L. Miller, R. Nakajima, P. Schneider, C. Sifón, G. Verdoes Kleijn July 10, 2015 astro-ph.CO, astro-ph.GA The Kilo-Degree Survey (KiDS) is an optical wide-field survey designed to map the matter distribution in the Universe using weak gravitational lensing. In this paper, we use these data to measure the density profiles and masses of a sample of $\sim \mathrm{1400}$ spectroscopically identified galaxy groups and clusters from the Galaxy And Mass Assembly (GAMA) survey. We detect a highly significant signal (signal-to-noise-ratio $\sim$ 120), allowing us to study the properties of dark matter haloes over one and a half order of magnitude in mass, from $M \sim 10^{13}-10^{14.5} h^{-1}\mathrm{M_{\odot}}$. We interpret the results for various subsamples of groups using a halo model framework which accounts for the mis-centring of the Brightest Cluster Galaxy (used as the tracer of the group centre) with respect to the centre of the group's dark matter halo. We find that the density profiles of the haloes are well described by an NFW profile with concentrations that agree with predictions from numerical simulations. In addition, we constrain scaling relations between the mass and a number of observable group properties. We find that the mass scales with the total r-band luminosity as a power-law with slope $1.16 \pm 0.13$ (1-sigma) and with the group velocity dispersion as a power-law with slope $1.89 \pm 0.27$ (1-sigma). 
Finally, we demonstrate the potential of weak lensing studies of groups to discriminate between models of baryonic feedback at group scales by comparing our results with the predictions from the Cosmo-OverWhelmingly Large Simulations (Cosmo-OWLS) project, ruling out models without AGN feedback. Photometric redshift requirements for lens galaxies in galaxy-galaxy lensing analyses (1107.1395) R. Nakajima, R. Mandelbaum, U. Seljak, J. D. Cohn, R. Reyes, R. Cool Nov. 23, 2011 astro-ph.CO Weak gravitational lensing is a valuable probe of galaxy formation and cosmology. Here we quantify the effects of using photometric redshifts (photo-z) in galaxy-galaxy lensing, for both sources and lenses, both for the immediate goal of using galaxies with photo-z as lenses in the Sloan Digital Sky Survey (SDSS) and as a demonstration of methodology for large, upcoming weak lensing surveys that will by necessity be dominated by lens samples with photo-z. We calculate the bias in the lensing mass calibration as well as consequences for absolute magnitude (i.e., k-corrections) and stellar mass estimates, for a large sample of SDSS Data Release 8 (DR8) galaxies. The redshifts are obtained with the template based photo-z code ZEBRA on the SDSS DR8 ugriz photometry. We assemble and characterise the calibration samples (~9k spectroscopic redshifts from four surveys) to obtain photometric redshift errors and lensing biases corresponding to our full SDSS DR8 lens and source catalogues. Our tests of the calibration sample also highlight the impact of observing conditions in the imaging survey when the spectroscopic calibration covers a small fraction of its footprint; atypical imaging conditions in calibration fields can lead to incorrect conclusions regarding the photo-z of the full survey. For the SDSS DR8 catalogue, we find sigma_z/(1+z)=0.096 and 0.113 for the lens and source catalogues, with flux limits of r=21 and r=21.8, respectively. 
We also explore the systematic uncertainty in the lensing signal calibration when using source photo-z, and both lens and source photo-z; given the size of existing training samples, we can constrain the lensing signal calibration (and therefore the normalization of the surface mass density) to within 2 and 4 per cent, respectively. [ABRIDGED] Direct Confirmation of the Asymmetry of the Cas A Supernova with Light Echoes (1003.5660) A. Rest, R. J. Foley, B. Sinnott, D. L. Welch, C. Badenes, A. V. Filippenko, M. Bergmann, W. A. Bhatti, S. Blondin, P. Challis, G. Damke, H. Finley, M. E. Huber, D. Kasen, R. P. Kirshner, T. Matheson, P. Mazzali, D. Minniti, R. Nakajima, G. Narayan, K. Olsen, D. Sauer, R. C. Smith, N. B. Suntzeff Feb. 4, 2011 astro-ph.SR We report the first detection of asymmetry in a supernova (SN) photosphere based on SN light echo (LE) spectra of Cas A from the different perspectives of dust concentrations on its LE ellipsoid. New LEs are reported based on difference images, and optical spectra of these LEs are analyzed and compared. After properly accounting for the effects of finite dust-filament extent and inclination, we find one field where the He I and H alpha features are blueshifted by an additional ~4000 km/s relative to other spectra and to the spectra of the Type IIb SN 1993J. That same direction does not show any shift relative to other Cas A LE spectra in the Ca II near-infrared triplet feature. We compare the perspectives of the Cas A LE dust concentrations with recent three-dimensional modeling of the SN remnant (SNR) and note that the location having the blueshifted He I and H alpha features is roughly in the direction of an Fe-rich outflow and in the opposite direction of the motion of the compact object at the center of the SNR. We conclude that Cas A was an intrinsically asymmetric SN. 
Future LE spectroscopy of this object, and of other historical SNe, will provide additional insight into the connection of explosion mechanism to SN to SNR, as well as give crucial observational evidence regarding how stars explode. Improved Constraints on the Gravitational Lens Q0957+561. I. Weak Lensing (0903.4182) R. Nakajima, G.M. Bernstein, R. Fadely, C.R. Keeton, T. Schrabback April 8, 2009 astro-ph.CO Attempts to constrain the Hubble constant using the strong gravitational lens system Q0957+561 are limited by systematic uncertainties in the mass model, since the time delay is known very precisely. One important systematic effect is the mass sheet degeneracy, which arises because strong lens modeling cannot constrain the presence or absence of a uniform mass sheet $\kappa$, which rescales $H_0$ by the factor $(1-\kappa)$. In this paper we present new constraints on the mass sheet derived from a weak-lensing analysis of the Hubble Space Telescope imaging of a 6 arcmin square region surrounding the lensed quasar. The average mass sheet within a circular aperture (the strong lens model region) is constrained by integrating the tangential weak gravitational shear over the surrounding area. We find the average convergence within a $30"$ radius around the lens galaxy to be $\kappa(<30") = 0.166\pm0.056$ ($1\sigma$ confidence level), normalized to the quasar redshift. This includes contributions from both the lens galaxy and the surrounding cluster. We also constrain a few other low-order terms in the lens potential by applying a multipole aperture mass formalism to the gravitational shear in an annulus around the strong lensing region. Implications for strong lens models and the Hubble constant are discussed in an accompanying paper. Low-Energy Astrophysics: Stimulating the Reduction of Energy Consumption in the Next Decade (0903.3384) P.J. Marshall, N. Bennert, E.S. Rykoff, K.J. Shen, J.D.R. Steinfadt, J. Fregeau, R-R. Chary, K. Sheth, B. Weiner, K.B. 
Henisey, E.L. Quetin, R. Antonucci, D. Kaplan, P. Jonsson, M.W. Auger, C. Cardamone, T. Tao, D.E. Holz, M. Bradac, T.S. Metcalfe, S. McHugh, M. Elvis, B.J. Brewer, T. Urrutia, F. Guo, W. Hovest, R. Nakajima, B.-Q. For, D. Erb, D. Paneque March 19, 2009 astro-ph.IM In this paper we address the consumption of energy by astronomers while performing their professional duties. Although we find that astronomy uses a negligible fraction of the US energy budget, the rate at which energy is consumed by an average astronomer is similar to that of a typical high-flying businessperson. We review some of the ways in which astronomers are already acting to reduce their energy consumption. In the coming decades, all citizens will have to reduce their energy consumption to conserve fossil fuel reserves and to help avert a potentially catastrophic change in the Earth's climate. The challenges are the same for astronomers as they are for everyone: decreasing the distances we travel and investing in energy-efficient infrastructure. The high profile of astronomy in the media, and the great public interest in our field, can play a role in promoting energy-awareness to the wider population. Our specific recommendations are therefore to 1) reduce travel when possible, through efficient meeting organization, and by investing in high-bandwidth video conference facilities and virtual-world software, 2) create energy-efficient observatories, computing centers and workplaces, powered by sustainable energy resources, and 3) actively publicize these pursuits. Cosmological parameters from SDSS and WMAP (astro-ph/0310723) M. Tegmark, M. Strauss, M. Blanton, K. Abazajian, S. Dodelson, H. Sandvik, X. Wang, D. Weinberg, I. Zehavi, N. Bahcall, F. Hoyle, D. Schlegel, R. Scoccimarro, M. Vogeley, A. Berlind, T. Budavari, A. Connolly, D. Eisenstein, D. Finkbeiner, J. Frieman, J. Gunn, L. Hui, B. Jain, D. Johnston, S. Kent, H. Lin, R. Nakajima, R. Nichol, J. Ostriker, A. Pope, R. Scranton, U. Seljak, R. 
Sheth, A. Stebbins, A. Szalay, I. Szapudi, Y. Xu, et al. (the SDSS collaboration) Jan. 15, 2004 astro-ph, hep-th, hep-ph We measure cosmological parameters using the three-dimensional power spectrum P(k) from over 200,000 galaxies in the Sloan Digital Sky Survey (SDSS) in combination with WMAP and other data. Our results are consistent with a ``vanilla'' flat adiabatic Lambda-CDM model without tilt (n=1), running tilt, tensor modes or massive neutrinos. Adding SDSS information more than halves the WMAP-only error bars on some parameters, tightening 1 sigma constraints on the Hubble parameter from h~0.74+0.18-0.07 to h~0.70+0.04-0.03, on the matter density from Omega_m~0.25+/-0.10 to Omega_m~0.30+/-0.04 (1 sigma) and on neutrino masses from <11 eV to <0.6 eV (95%). SDSS helps even more when dropping prior assumptions about curvature, neutrinos, tensor modes and the equation of state. Our results are in substantial agreement with the joint analysis of WMAP and the 2dF Galaxy Redshift Survey, which is an impressive consistency check with independent redshift survey data and analysis techniques. In this paper, we place particular emphasis on clarifying the physical origin of the constraints, i.e., what we do and do not know when using different data sets and prior assumptions. For instance, dropping the assumption that space is perfectly flat, the WMAP-only constraint on the measured age of the Universe tightens from t0~16.3+2.3-1.8 Gyr to t0~14.1+1.0-0.9 Gyr by adding SDSS and SN Ia data. Including tensors, running tilt, neutrino mass and equation of state in the list of free parameters, many constraints are still quite weak, but future cosmological measurements from SDSS and other sources should allow these to be substantially tightened.
Paleomagnetic studies on single crystals separated from the middle Cretaceous Iritono granite

Frontier letter

Chie Kato, Masahiko Sato, Yuhji Yamamoto, Hideo Tsunakawa & Joseph L. Kirschvink

Earth, Planets and Space volume 70, Article number: 176 (2018)

Investigations of superchrons are key to understanding long-term changes of the geodynamo and the mantle's controlling role. Granitic rocks could be good recorders of deep-time geomagnetic field behavior, but paleomagnetic measurements on whole-rock granitic samples are often disturbed by alteration such as weathering and by the presence of multi-domain magnetite. To avoid such difficulties and to test the usefulness of single-silicate-crystal paleomagnetism, here we report rock-magnetic and paleomagnetic properties of single crystals and compare them to those of the host granitic rock. We studied individual zircon, quartz and plagioclase crystals separated from the middle Cretaceous Iritono granite, for which past studies have provided tight constraints on the paleomagnetism and paleointensity. The occurrence of magnetite was very low in zircon and quartz. The plagioclase crystals, on the other hand, contained substantial amounts of fine-grained single-domain to pseudo-single-domain magnetite. Microscopic features and the distinctive magnetic behavior of the plagioclase crystals indicate that the magnetite inclusions were generated by exsolution. We therefore performed paleointensity experiments by the Tsunakawa–Shaw method on 17 plagioclase crystals. Nine samples passed the standard selection criteria for reliable paleointensity determinations, and the mean value obtained was consistent with the previously reported whole-rock paleointensity value.
The virtual dipole moment was estimated to be higher than 8.9 ± 1.8 × 10^22 Am^2, suggesting that the time-averaged field strength during the middle of the Cretaceous normal superchron was several times larger than that of non-superchron periods. Single plagioclase crystals with exsolved magnetite inclusions can be more suitable for the identification of magnetic signals and the interpretation of paleomagnetic records than conventional whole-rock samples or other silicate grains. Long superchrons of constant geomagnetic polarity are the most distinctive features of the ~ 10 Myr-scale trend of the geomagnetic field and are very possibly related to whole-mantle convection processes such as the activity of mantle plumes (e.g. Larson and Olson 1991; Glatzmaier et al. 1999; Courtillot and Olson 2007; Zhang and Zhong 2011; Biggin et al. 2012). Numerical dynamo simulations indicate that non-reversing stable dynamos with strong dipole moments occur under conditions of relatively low CMB heat flow, whereas reversing dynamos with a multipolar character are expected under conditions of high CMB heat flow (e.g. Kutzner and Christensen 2002; Christensen and Aubert 2006; Olson and Christensen 2006). Additionally, high dipole fields could also be caused by enhanced heterogeneity of CMB heat flow (e.g. Takahashi et al. 2008; Olson et al. 2010). Understanding the geomagnetic field intensity during superchrons is crucial for revealing the nature of the long-term change of the geodynamo and the controlling role of the mantle on it. While dynamo simulations predict a stronger field during superchrons, previous paleomagnetic studies have not reached a consensus on the paleointensity during the Cretaceous normal superchron (CNS) at 83–120 Ma. Several studies suggest a stronger field during the CNS than the average for ages with frequent reversals (e.g. Tarduno et al. 2006; Tauxe 2006), while others have claimed the opposite (e.g. Tanaka and Kono 2002; Shcherbakova et al. 2012).
It is also challenging to draw a solid conclusion from paleomagnetic databases such as PINT (Biggin et al. 2009) and MagIC (earthref.org/MagIC), because the data deposited in them are mostly from volcanic rocks, which reflect short-term geomagnetic variations and hide the long-term trends of paleointensity. In order to establish a reliable paleointensity curve for long-term variation, a new dataset based on appropriate samples and measurement methods is required. To focus on the long-term variations of the past geomagnetic field, plutonic rocks could provide good candidate samples, since they are likely to record the time-averaged field accurately during their long cooling history. Granites in particular have formed at various ages and have been preserved over geological time. However, paleomagnetic studies using granitic rocks are usually difficult owing to weathering of the rocks and the non-ideality of coarse-grained multi-domain (MD) magnetite. Also, granitic rocks often contain biotite and pyrrhotite, which decompose easily upon laboratory heating and form some magnetite as a byproduct. One of the most promising approaches to overcome these difficulties is to separate single silicate crystals that contain magnetic mineral inclusions and use them for paleomagnetic measurements. Single silicate crystals with magnetic inclusions have the potential to yield reliable paleointensity data because the inclusions are better protected from chemical alteration, such as oxic weathering in nature and thermochemical oxidation upon laboratory heating, than are the host granitic rocks. Zircon crystals have been used for paleointensity studies owing to their resistance to chemical alteration and the ability to obtain direct radiometric ages from them (Tarduno et al. 2014, 2015), although Weiss et al. (2015) raised objections.
Detailed rock-magnetic properties of zircons collected from river sand in the Tanzawa pluton, Japan, showed their adequacy for paleointensity measurements (Sato et al. 2015). Fu et al. (2017) reported that the absolute value of zircon paleointensity was consistent with that of the bulk rock using the Bishop Tuff of eastern California. Paleointensity and rock-magnetic properties have also been intensively studied on single plagioclase crystals, which can contain magnetically stable fine-grained magnetite inclusions (Tarduno et al. 2006). For basalts from the 1955 Kilauea eruption, the recovered paleointensity has been compared with whole-rock and magnetic observatory data, with good agreement (Cottrell and Tarduno 1999). Plagioclase crystals separated from lavas from the Rajmahal Traps (113–116 Ma; Tarduno et al. 2001), the Strand Fiord Formation (95 Ma; Tarduno et al. 2002), Ocean Drilling Program (ODP) Site 1205 on Nintoku Seamount of the Hawaiian-Emperor volcanic chain and ODP Site 801 in the Pigafetta Basin (55.59 and 160 Ma, respectively; Tarduno and Cottrell 2005), and the Kiaman Reversed Superchron type area (~ 262–318 Ma; Cottrell et al. 2008) have been used for studies on paleointensity variation related to the reversal frequency of the dipole field. Quartz phenocrysts have also been targeted in studies of Archean rocks (Tarduno et al. 2007, 2010, 2014). All of the paleointensity measurements mentioned above were made using variants of the Thellier–Thellier method (Thellier and Thellier 1959; Coe 1967; Yu et al. 2004). Rock-magnetic properties of plagioclase separated from plutonic rocks such as granitoids (Usui et al. 2015) and gabbros (Feinberg et al. 2005; Muxworthy and Evans 2012) have also been reported. Some of them are characterized by tiny needle-shaped magnetite inclusions possibly formed by exsolution from the host plagioclase (Feinberg et al. 2005; Usui et al. 2015; Wenk et al. 2011).
Several preceding studies reported paleointensity estimates using plutonic rocks in which the authors argued that the magnetic records were carried by exsolved magnetite (Selkin et al. 2008; Usui 2013). Plagioclase with exsolved magnetite is potentially an excellent recording medium of the ancient geomagnetic field, but should be treated carefully because (1) the magnetic remanence anisotropy caused by needle-shaped magnetite can affect paleomagnetic results (Paterson 2013; Usui et al. 2015), (2) thermoremanence acquisition can be nonlinear (Selkin et al. 2007), and (3) the formation temperature of exsolved magnetite is unknown (Feinberg et al. 2005). Usui and Nakamura (2009) reported paleointensities using single plagioclase crystals separated from a granitic rock, although they did not claim that their estimates were exact and reliable. Despite their potential for establishing the long-term trend of geomagnetic field strength, paleointensities of single crystals separated from plutonic rocks have not been compared to those of the host whole rock to assess their reliability. This study aims to assess how reliable paleointensity measurements on single silicate crystals separated from granitic rocks are compared to those on whole-rock samples. We conducted systematic rock-magnetic measurements on zircon, quartz and plagioclase grains separated from whole-rock samples collected from the Cretaceous Iritono granite, a paleomagnetically well-studied unit in northeast Japan. The results suggest that plagioclase is the most appropriate candidate mineral for paleointensity measurements among the studied minerals. We therefore conducted paleointensity experiments on plagioclase and compared the results with previously published results from the host granitic rock. Paleointensity experiments were conducted by the Tsunakawa–Shaw method (Tsunakawa and Shaw 1994; Yamamoto et al. 2003; Mochizuki et al. 2004; Yamamoto and Tsunakawa 2005; Yamamoto et al.
2015), which might be more suitable for single-grain samples with exsolved magnetite than the variants of the Thellier–Thellier method. The obtained paleointensity results were consistent with the whole-rock data; thus, plagioclase crystals separated from granitic rock have the potential to constrain the long-term variation of paleointensity. We studied zircon, quartz and plagioclase separated from the middle Cretaceous Iritono granite in the Abukuma massif, northeast Japan (Fig. 1). The cooling history of the Iritono granite is constrained by Wakabayashi et al. (2006) and Tsunakawa et al. (2009) using a thermal diffusion model of the granite body, and by two radiometric age determinations with different closure temperatures. The U–Pb zircon age is 115.7 ± 1.9 Ma (Tsunakawa et al. 2009) and the 40Ar–39Ar biotite age is 101.9 ± 0.2 Ma (Wakabayashi et al. 2006). The age of the Iritono granite corresponds to the middle part of the CNS, during which polarity reversals ceased for a period as long as 20 Myr. The estimated cooling time for lock-in of a paleomagnetic record is 4 × 10^4 to 1.4 × 10^7 years. Wakabayashi et al. (2006) and Tsunakawa et al. (2009) previously conducted rock-magnetic and paleomagnetic studies and paleointensity experiments on whole-rock samples of the Iritono granite. The magnetic minerals in the Iritono granite are magnetite and pyrrhotite, and their fractions vary with sampling location. Samples from site ITG09 showed the least contribution of pyrrhotite, and the primary magnetization was clearly distinguished from the secondary magnetization, carried by low blocking temperature or low coercivity components, in terms of natural remanent magnetization (NRM) direction. Tsunakawa et al. (2009) studied the paleointensity by both Coe's version of the Thellier method (Thellier and Thellier 1959; Coe 1967) and the Tsunakawa–Shaw method using the whole-rock sample of site ITG09.
Although the obtained paleointensities exhibit a bimodal distribution according to the different methods used, they were indistinguishable at the 2σ level and thus were combined into one site mean. The resultant site-mean paleointensity was 58.4 ± 7.3 μT before applying the cooling rate correction, and 39.0 ± 4.9 μT after correction. This corresponds to a virtual dipole moment (VDM) of 9.1 ± 1.1 × 10^22 Am^2. The present study used mineral samples separated from the core samples of site ITG09. Map of the Iritono granite in the Abukuma massif, northeast Japan (reproduced from Wakabayashi et al. 2006) A granite sample core 2.54 cm in diameter was crushed with a non-magnetic mortar and pestle, and sorted with 850 μm and 350 μm mesh screens. Heavy fractions of the sample smaller than 350 μm were concentrated by an aqueous panning technique. Zircons with no visible cracks or opaque particles on the surface were hand-picked under a binocular stereoscopic microscope. Quartz and plagioclase were hand-picked from samples larger than 350 μm and smaller than 850 μm. The selected crystals were leached with hydrochloric acid (HCl) to remove any tiny magnetic particles on the sample surface. The HCl concentration and leaching duration were 12 N and 4 days for zircon and quartz, and 6 N and 8 h for plagioclase, respectively. Samples were then sandwiched individually between layers of magnetically clean Scotch Magic Transparent Tape following the method of Sato et al. (2015), or were mounted individually on a glass holder (see below) for rock-magnetic measurements and paleointensity experiments. Remanence measurements using SQUID magnetometer A superconducting quantum interference device (SQUID) magnetometer (2G Enterprises Model 755-4.2 cm) was used for remanence measurements. We followed the method of single-crystal measurements of Sato et al. (2015). A sample holder made of acrylonitrile butadiene styrene (ABS) was used for measurements.
Single-crystal samples sandwiched by tape or mounted on the glass holder were fixed on the edge of the ABS holder with double-stick tape. The magnetic moments of the ABS holder and double-stick tape were measured before and after each sample measurement and subtracted from the sample moment. The detection limit of the method was 2 × 10^-12 Am^2, so we employed 4 × 10^-12 Am^2 as a threshold to distinguish significant remanence intensity from noise. For stepwise thermal demagnetization (ThD) and paleointensity experiments, we made new thermally resistant holders for single-crystal measurements with the SQUID magnetometer, based on the sample holder designed for SQUID microscope measurements (Fu et al. 2017). Images of the sample holder are shown in Fig. 2. Non-alkali high-temperature glass plates (Eagle XG, Corning, 1.1 mm thick) were cut into squares 7 mm on a side. A 1-mm-diameter pit was drilled in the center of the glass plate, followed by intense cleaning in concentrated HCl. A single-crystal sample was put into the pit and fixed by stuffing SiO2 powder with a grain size of ~ 0.8 μm. This technique enabled us to conduct heating experiments on single crystals in a fixed sample coordinate system. The blank magnetic moment of the glass holder, after subtracting the moments of the ABS holder and double-stick tape, was well below the practical detection limit of the SQUID magnetometer. Image of the glass holder fixed on the ABS holder. 7 mm on each side. A plagioclase grain can be seen through the glass in the center First, we measured the NRM intensities of 349, 455, and 268 grains of zircon, quartz, and plagioclase, respectively. On the basis of the NRM intensities, we then selected samples for further rock-magnetic and paleomagnetic measurements. Rock-magnetic measurements For the selected samples that showed significant NRM intensity (> 4 × 10^-12 Am^2 per grain), we conducted low-temperature remanence measurements using a magnetic property measurement system (Quantum Design model MPMS-XL5).
Isothermal remanent magnetization (IRM) was first imparted at 2.5 T and 10 K after zero-field cooling from 300 K. The remanence was then measured during warming in zero field (ZFC remanence). Subsequently, samples were cooled to 10 K in a 2.5 T field and the remanence was again measured during warming in zero field (FC remanence). Hysteresis loop measurements were made on plagioclase grains and a quartz grain that contained magnetite using an alternating gradient magnetometer (LakeShore model MicroMag 2900). Samples sandwiched by tape were mounted on a transducer probe with a silica sample stage (Lake Shore model P1 probe). The blank saturation magnetization of the probe was 6 × 10^-10 Am^2. The maximum field during hysteresis loop measurement was 0.5 T, and the field increment was 4 mT. Diamagnetic/paramagnetic corrections were applied to the obtained hysteresis loops by subtracting the average slopes at applied fields of |B| > 300 mT. Results are shown on the Day plot (Day et al. 1977). Stepwise ThD of NRM was performed on selected zircon and quartz samples using a TDS-1 thermal demagnetizer (Natsuhara Giken). For plagioclase samples, stepwise ThD of laboratory-imparted thermoremanent magnetization (TRM) was performed after the paleointensity experiments. TRM was imparted by cooling from 610 °C in a 50 μT field in air. To investigate the NRM to IRM (NRM/IRM) distribution of plagioclase, a room-temperature IRM was imparted to 75 plagioclase samples at 2 T with an MMPM10 pulse magnetizer (Magnetic Measurements), and the IRM intensity was measured using the SQUID magnetometer. Paleointensity experiments We performed paleointensity experiments with the Tsunakawa–Shaw method on 17 plagioclase grains. We followed the procedures described in Yamamoto and Tsunakawa (2005). In this method, stepwise alternating field demagnetization (AFD) of NRM and TRM is performed.
Assuming the similarity of TRM and anhysteretic remanent magnetization (ARM), alteration caused by laboratory heating is monitored by comparing the coercivity spectra of ARMs before heating (ARMbefore) and after (ARMafter). TRM is corrected by: $$ {\text{TRM}}^{*} = {\text{TRM}} \times {\text{ARM}}_{\text{before}} /{\text{ARM}}_{\text{after}} $$ where TRM* is the corrected TRM. Paleointensity is determined from the slope in the TRM*–NRM diagram. The samples are heated twice. Assuming that the thermal alteration in the first and second heating is similar, the validity of the ARM correction for alteration is checked after the second heating by comparing the recovered intensity to the known laboratory field. Samples are subjected to low-temperature demagnetization (LTD) before each stepwise AFD series to selectively demagnetize the unstable coarse-grained magnetite. LTD treatment was conducted by cooling a sample in a dewar bottle inside a triple magnetically shielded case filled with liquid nitrogen for 5 min. TRM was imparted by cooling from 610 °C in a 50 μT field in a vacuum (< 10 Pa). The heating time at the top temperature of 610 °C was 10 (20) minutes for the first (second) heating, with subsequent cooling to room temperature at a rate of approximately 10 °C per minute. AFD treatment and ARM impartment were carried out using an alternating field demagnetizer (Natsuhara Giken model DEM-95C). AFD was conducted during sample tumbling. ARM was imparted at a DC field of 50 μT, with a peak AC field of 180 mT. Here we define ARM0, ARM1 and ARM2 as the ARM imparted before heating, after the first heating and after the second heating, respectively. Also, TRM1 and TRM2 are given by the first and second heating, respectively. The corrected TRMs, TRM1* and TRM2*, are given by TRM1 × ARM0/ARM1 and TRM2 × ARM1/ARM2, respectively. The paleointensity value is calculated from the slope of the NRM–TRM1* plot.
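As an illustration, the ARM correction and slope-based estimate described above can be sketched numerically. The coercivity spectra and field values below are synthetic, the helper function name is ours, and a line forced through the origin is used for the slope (the actual analysis fits a free linear segment):

```python
import numpy as np

def shaw_paleointensity(nrm, trm1, arm0, arm1, b_lab):
    """Tsunakawa-Shaw-style estimate from coercivity spectra.

    nrm, trm1, arm0, arm1: remanence per AFD coercivity fraction
    (same AFD steps for all spectra); b_lab: laboratory field (uT).
    """
    # ARM correction: scale each TRM1 coercivity fraction by the
    # pre-/post-heating ARM ratio to compensate thermal alteration.
    trm1_star = trm1 * arm0 / arm1
    # Paleointensity from the slope of the NRM-TRM1* plot
    # (least-squares line through the origin for this sketch).
    slope = np.dot(nrm, trm1_star) / np.dot(trm1_star, trm1_star)
    return slope * b_lab

# Synthetic example: heating reduced the ARM/TRM capacity by 20%,
# and the ancient field was 1.2x the 50 uT laboratory field.
arm0 = np.array([4.0, 3.0, 2.0, 1.0])
arm1 = 0.8 * arm0
trm1 = 0.8 * np.array([5.0, 3.5, 2.5, 1.5])
nrm = 1.2 * np.array([5.0, 3.5, 2.5, 1.5])
print(shaw_paleointensity(nrm, trm1, arm0, arm1, b_lab=50.0))  # ~ 60.0 uT
```

Because the ARM ratio undoes the simulated 20% alteration, the recovered intensity matches the 1.2 × 50 μT input field.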
The field intensity calculated from the slope of the TRM1–TRM2* plot is compared to the laboratory field intensity. We attempted to deal with the anisotropy effect on paleointensity using two experimental protocols. For four samples (sample IDs 9004, 9009, 9013 and 9016), all ARMs and TRMs were imparted along the likely direction of the characteristic remanent magnetization (ChRM), estimated from the orthogonal plot of AFD of the NRM, so that the anisotropy bias was canceled. For the others we followed the standard protocol of the Tsunakawa–Shaw method, in which ARM0 is approximately parallel to ChRM and ARM1 (ARM2) is parallel to TRM1 (TRM2). This protocol employs a built-in anisotropy correction using ARMs (Yamamoto et al. 2015); the anisotropy bias caused by the angular difference between NRM (TRM1) and TRM1 (TRM2) is corrected by the ratios ARM0/ARM1 (ARM1/ARM2). In the present study, ARM1, ARM2, TRM1 and TRM2 were imparted along the Y axis, which is independent of the direction of ChRM. In this study, AFD steps for ARMs without LTD treatment (ARM00, ARM10, and ARM20) were omitted. Remanence anisotropy measurements To assess the anisotropy effect, we measured the ARM anisotropy of 19 plagioclase samples, including 13 samples that were subjected to the paleointensity experiments. ARM was imparted along three orthogonal axes (ARMx, ARMy, and ARMz) to obtain the remanence anisotropy tensor. Measurements were made after LTD treatments. The TRM anisotropy tensor was also measured after the paleointensity experiments for some samples, and its consistency with the ARM anisotropy tensor was checked. The ARM and TRM measurements were also made after AFD with a peak AC field of 50 mT. The ARM anisotropy of whole-rock samples was checked based on measurements made with a spinner magnetometer (Natsuhara Giken model SMD88). Rock-magnetic properties of zircon Sixteen out of 349 zircon samples had NRM intensities larger than the threshold (Fig. 3a).
Low-temperature magnetometry and stepwise ThD measurements of NRM were made on selected samples that had significant NRM intensity. Representative results are summarized in Fig. 4. Stepwise ThD treatment of NRM was performed on four samples. Two samples showed a characteristic magnetization component and a pyrrhotite-like blocking temperature (Fig. 4a), but the other two did not show any stable remanence component (Fig. 4b). Low-temperature experiments were performed on an additional five samples. One sample showed the phase transition of pyrrhotite at ~ 30 K (Fig. 4c), while the other four samples did not show any obvious transition (Fig. 4d). We concluded that the dominant magnetic inclusions in zircon are pyrrhotite and/or magnetically very soft materials. Since the whole-rock study determined that a low blocking temperature (< 400 °C) component is probably carried by pyrrhotite, and hence was most likely remagnetized by a reheating event (Wakabayashi et al. 2006), we did not use zircon for paleointensity experiments. Histogram of NRM intensity of a zircon, b quartz and c plagioclase. Dotted lines indicate the threshold for significant remanence. Inset figures in a and b show the histogram of NRM intensity above the threshold (4 × 10^-12 Am^2) Results of experiments on selected zircon. a, b Stepwise ThD of NRM. In the orthogonal plot, open and closed circles indicate X–Y and X–Z planes, respectively. c, d Low-temperature remanence measurements. Solid lines for ZFC measurements and dotted lines for FC measurements Rock-magnetic properties of quartz Similar to the zircon samples, very few quartz samples (7 out of 455) had NRM intensities larger than the threshold (Fig. 3b). We made stepwise ThD measurements of NRM on two quartz grains. In both samples, magnetization decreased generally toward the origin in the orthogonal plot. One sample shows a high blocking temperature suggesting a magnetite inclusion (Fig.
5a) and the other sample exhibits a lower, pyrrhotite-like blocking temperature (Fig. 5b). We further made low-temperature magnetometry and hysteresis loop measurements on the former sample. The Verwey transition of magnetite was recognized near 120 K (Fig. 5c), indicating titanium-poor magnetite (Özdemir et al. 1993; Moskowitz et al. 1998). The relatively high coercivity (Bc > 10 mT) exhibited in the slope-corrected hysteresis loop suggests the existence of fine-grained magnetite. We concluded that quartz is a potentially ideal sample for paleomagnetic study. However, we decided not to use quartz for paleointensity experiments because it was difficult to find enough magnetite-bearing samples to study. Results of experiments on selected quartz. a, b Stepwise ThD of NRM. In the orthogonal plot, open and closed circles indicate X–Y and X–Z planes, respectively. c Low-temperature remanence measurements. Solid lines for ZFC measurements and dotted lines for FC measurements. d Hysteresis loop after slope correction. The slope correction coefficient was + 74.8 nAm^2/T Rock-magnetic properties of plagioclase A histogram of NRM intensity for the plagioclase crystals is shown in Fig. 3c. In contrast to the zircon and quartz samples, a large proportion of the plagioclase samples (224 out of 268; 84%) exhibited significant NRM intensities. Figure 6a shows NRM intensity plotted against IRM intensity for the 75 plagioclase grains. NRM/IRM falls in a narrow range around 0.1 (Fig. 6b), indicating a similar magnetic carrier and NRM origin among the plagioclase grains. The NRM/IRM ratio of around 0.1 is higher than the TRM(50 μT)/IRM ratio for synthetic samples that resemble rocks (Yu 2010), and consistent with, but somewhat lower than, previous reports on plagioclase crystals (Usui et al. 2015) or rocks containing exsolved magnetite (Selkin et al. 2007). Results of experiments on plagioclase. a NRM intensity plotted as a function of IRM intensity.
Horizontal and vertical dashed lines indicate the threshold for significant remanence. b Histogram of NRM intensity divided by IRM intensity. c Representative hysteresis loop after slope correction. The slope correction coefficient was + 61.4 nAm^2/T. d Day plot of quartz and plagioclase grains shown with previous reports by Wakabayashi et al. (2006). Closed symbols represent results of this study. The inverted triangle indicates quartz, and circles indicate plagioclase. Open symbols represent results of Wakabayashi et al. (2006). Triangles, circles and squares indicate non-separated chips, feldspar fraction and biotite fraction, respectively. Dotted lines show the SD–MD magnetite mixture trend after Channell and McCabe (1994) and Parry (1982). e Low-temperature remanence measurements. Solid lines for ZFC measurements and dotted lines for FC measurements. f Stepwise ThD of TRM given at 50 μT compared to that of TRM1 of the whole-rock sample ITG09b-34-1 (Wakabayashi et al. 2006). Dashed lines for plagioclase grains and solid line for whole rock. Solid gray line indicates the TRM value before ThD We performed magnetic hysteresis and low-temperature magnetometry measurements on four selected grains with different NRM/IRM ratios. All four samples exhibited similar features in both the hysteresis and low-temperature magnetization. Results of the hysteresis measurements fall in the PSD region of the Day plot (Fig. 6d) and are concentrated in a narrower region of the diagram than the whole rock. This indicates that the magnetite in the plagioclase crystals has a narrower range of grain size than that in the whole rock. The Verwey transition of magnetite was clearly observed at approximately 120 K (Fig. 6e), indicating a very low titanium content of the magnetite (Özdemir et al. 1993; Moskowitz et al. 1998). The larger remanence in the FC curve relative to the ZFC curve (Fig. 6e) also suggests a dominance of fine-grained magnetite (Moskowitz et al. 1993; Carter-Stiglitz et al.
2001, 2002; Kosterov 2003). After the paleointensity measurements, we made hysteresis measurements on four samples and low-temperature measurements on one sample. The results were similar to those shown in Fig. 6c, e, which implies that the double heating during the paleointensity measurements did not severely affect the magnetic characteristics of the plagioclase grains. The blocking temperature distribution was investigated for four samples after the paleointensity experiments. The results show a very narrow blocking temperature distribution around 530–580 °C (Fig. 6f). Figure 7 is a microscopic image of a polished plagioclase sample (sample no. 68 in Table 2). Tiny opaque minerals with rounded to needle-like shapes are uniformly distributed in the host plagioclase and show no association with cracks. These opaque minerals are probably magnetite, and the texture implies that the magnetite was not generated by secondary alteration but rather has a primary origin, such as incorporation during plagioclase crystallization or exsolution at subsolidus conditions. Furthermore, the needle-like shape of the magnetite and its preferred alignment relative to the feldspar suggest an origin via exsolution, because magnetite tends to form equant octahedral crystals when crystallizing from a magma. The needle-like grains can be categorized as SD on the basis of their length (a few microns in most cases, but up to > 10 μm) and width-to-length ratios of < 0.1 (Dunlop and Özdemir 1997). The round-shaped grains are possibly in the PSD state. Microscopic image of a polished single plagioclase crystal. Stack of snapshots at different focal depths To summarize, the rock-magnetic measurements indicate that the plagioclase crystals contain nearly pure, needle-shaped SD and PSD magnetite with widths of less than a few microns and various aspect ratios, and are suitable for paleointensity measurements.
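The diamagnetic/paramagnetic slope correction applied to the hysteresis loops (fitting the average slope at |B| > 300 mT, where the ferromagnetic contribution is saturated, and subtracting it) can be sketched as follows. The synthetic loop branch and the 75 nAm^2/T paramagnetic slope are illustrative values of our choosing, not measured data:

```python
import numpy as np

def slope_correct(B, M, b_min=0.3):
    """Remove the para/diamagnetic linear contribution from one branch
    of a hysteresis curve: fit the slope separately at high positive
    and high negative fields, average the two, and subtract."""
    pos, neg = B > b_min, B < -b_min        # ferromagnet saturated here
    s_pos = np.polyfit(B[pos], M[pos], 1)[0]
    s_neg = np.polyfit(B[neg], M[neg], 1)[0]
    slope = 0.5 * (s_pos + s_neg)           # average high-field slope
    return M - slope * B, slope

# Synthetic branch: a saturating ferromagnetic part plus a
# paramagnetic slope of 75 nAm^2/T (illustrative).
B = np.linspace(-0.5, 0.5, 1001)            # applied field, tesla
M_ferro = 3e-9 * np.tanh(B / 0.02)          # Am^2, saturates by ~0.1 T
M = M_ferro + 75e-9 * B
M_corr, slope = slope_correct(B, M)
print(round(slope * 1e9, 1))                # recovered slope, nAm^2/T
```

Fitting the two high-field segments separately avoids the offset bias that a single fit across both signs of the saturated moment would introduce.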
Paleointensity experiments of plagioclase We conducted Tsunakawa–Shaw paleointensity experiments on 17 plagioclase grains. In consideration of the sensitivity of the instrument, samples with NRM intensities larger than 5 × 10^-11 Am^2 were chosen for the experiments. Taking the weak remanences of single-crystal samples into account, we employed slightly different selection criteria from the study of Yamamoto and Tsunakawa (2005), which worked on the strong remanences of volcanic whole rocks. The criteria we adopted are:

1. A primary component is found in an orthogonal plot of NRM demagnetization
2. f > 0.3 in the NRM–TRM1* plot
3. R > 0.90 in the NRM–TRM1* plot
4. Slope of the TRM1–TRM2* plot within 1 ± 0.1
5. R > 0.95 in the TRM1–TRM2* plot

where f is the NRM fraction of the primary component and R is the correlation coefficient. Primary components were identified on an orthogonal plot of NRM, and these were associated with MAD values < 16°. By the LTD treatment, 5–15% of the NRM was demagnetized. The components demagnetized by LTD are attributed to remanence carried by PSD magnetite (Heider et al. 1992). By the above criteria, 9 out of 17 results were selected: two results were rejected by criteria 1 and/or 3; six results were rejected by criterion 4. Typical examples of successful and rejected results are presented in Figs. 8, 9 and 10, respectively. Results for all 17 samples are summarized in Table 1. Paleointensities obtained from the nine accepted results range between 43.1 and 77.9 μT, yielding an average of 57.4 μT and a standard deviation of 11.8 μT. Figure 11 shows the individual plagioclase paleointensities and their average, together with the average paleointensity reported from the whole rocks. The plagioclase average is in good agreement with the average paleointensity reported from the whole rocks, though the dispersion of the plagioclase paleointensities is slightly larger than that of the whole rocks.
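The selection criteria above lend themselves to a small screening function. The function name and the synthetic spectra below are ours; criterion 1 (a primary component in the orthogonal plot) requires PCA/visual inspection of the demagnetization data, so it is passed in as a flag along with the pre-computed NRM fraction f:

```python
import numpy as np

def passes_criteria(nrm, trm1_star, trm1, trm2_star, primary_found, f):
    """Apply criteria 2-5 of the screening described in the text:
    f > 0.3 and R > 0.90 on the NRM-TRM1* plot; TRM1-TRM2* slope
    within 1 +/- 0.1 with R > 0.95. Criterion 1 enters as a flag."""
    r1 = np.corrcoef(trm1_star, nrm)[0, 1]          # NRM-TRM1* linearity
    slope2 = np.polyfit(trm1, trm2_star, 1)[0]      # alteration check
    r2 = np.corrcoef(trm1, trm2_star)[0, 1]
    return (primary_found and f > 0.3 and r1 > 0.90
            and abs(slope2 - 1.0) < 0.1 and r2 > 0.95)

# Ideal synthetic spectra: linear NRM-TRM1* relation, TRM2* == TRM1.
trm1_star = np.array([5.0, 3.5, 2.5, 1.5])
nrm = 1.15 * trm1_star
trm1 = np.array([4.8, 3.4, 2.4, 1.4])
print(passes_criteria(nrm, trm1_star, trm1, trm1.copy(),
                      primary_found=True, f=0.9))   # accepted
```

A result with a TRM1–TRM2* slope of, say, 1.3 would fail the fourth check, mirroring the six rejections by criterion 4 reported above.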
A representative result of successful paleointensity measurements by the Tsunakawa–Shaw method on a single plagioclase grain. The dotted line indicates where the horizontal and vertical axes are equal. In the orthogonal plot, open and closed circles indicate X–Y and X–Z planes, respectively Example of failed paleointensity measurements by the Tsunakawa–Shaw method on a single plagioclase grain. In the orthogonal plot, open and closed circles indicate X–Y and X–Z planes, respectively. The TRM1–TRM2* slope severely exceeds 1 Example of failed paleointensity measurements by the Tsunakawa–Shaw method on a single plagioclase grain. In the orthogonal plot, open and closed circles indicate X–Y and X–Z planes, respectively. No linear portion in the NRM–TRM1* plot Table 1 Results of paleointensity measurements of plagioclase samples Summary of paleointensity and error (1σ) of plagioclase compared with the whole rock. Black circles indicate results for each plagioclase grain. The green circle denotes the mean of the nine plagioclase grains. The red square marks the mean of the whole-rock measurements (Tsunakawa et al. 2009) Two protocols were employed for handling the anisotropy bias on paleointensity ("Paleointensity experiments" section). In both protocols, it was technically difficult to impart ARM0 accurately parallel to ChRM. Therefore, the anisotropy bias on each sample would not be corrected completely. The angular differences between ChRM and ARM0 were 24° at most. The possible canceling of the anisotropy bias by averaging a number of samples is discussed in the "Anisotropy effect on paleointensity" section. The protocol in which ARM1, ARM2, TRM1 and TRM2 were imparted along the Y axis seems to be more reproducible for the present sample configuration, though the number of studied samples was not enough to determine which protocol was more suitable. We found that ARM was larger than TRM in all plagioclase samples, in contrast to the whole-rock sample.
This has been reported as a peculiar feature of exsolved magnetite by Usui et al. (2015).

Remanence anisotropy of plagioclase

ARM anisotropy tensors were estimated from the measured ARMx, ARMy, and ARMz for each plagioclase grain. Eigenvalues and the anisotropy parameters — the corrected anisotropy degree Pj and the shape factor Tj (Jelinek 1981) — were calculated (Table 2). Positive and negative values of Tj indicate that the anisotropy ellipsoid is oblate or prolate, respectively. The median Pj is 3.35, and Tj varies from +0.72 to −0.80. Typical results of the analysis of anisotropy directions and anisotropy parameters are shown in Fig. 12. The directions of the anisotropy axes are identical for ARM and TRM and do not change with AFD; hence we used the ARM anisotropy tensor as a proxy for the TRM anisotropy tensor in the discussion in the "Anisotropy effect on paleointensity" section. Pj increased after AFD, which is reasonable considering that grains with high aspect ratios correspond to high-coercivity components. The whole-rock ARM was nearly isotropic (Pj = 1.2, Tj = −0.28).

Table 2. Anisotropy parameters of the plagioclase samples.

Fig. 12. Representative results of the anisotropy-axis measurements on single plagioclase crystals, in the sample-holder coordinate system (not oriented). Left: a sample with prolate anisotropy; right: a sample with oblate anisotropy.

Anisotropy effect on paleointensity

Our paleointensity values could be either larger or smaller than the true value depending on the angles between the anisotropy axes and the directions of the ancient or laboratory fields (Paterson 2013). In our paleointensity experiments the anisotropy bias is mainly caused by the directional difference between the external field that produced the NRM and the laboratory field that produced the ARM0. Since the whole-rock sample was isotropic, the plagioclase grains should be randomly oriented in the host rock, assuming that the ARM of the whole rock is mainly carried by plagioclase-hosted magnetite.
Therefore, the direction of the external field that produced the NRM should be random with respect to the anisotropy axes of each plagioclase grain. We calculated the anisotropy bias between two remanence vectors produced by randomly oriented external fields of the same intensity, using the typical anisotropy tensor with eigenvalues (w1, w2, w3) = (1.56, 0.90, 0.49) and assuming that the laboratory field was also randomly oriented with respect to the anisotropy axes. Figure 13 shows the anisotropy bias (the ratio of the intensities of the two remanence vectors) as a function of the angular difference between the two remanence vectors (NRM and ARM0). In the present study we obtained paleointensity results from nine plagioclase samples, and the angular difference between NRM and ARM0 for each sample was below 25°. Under this condition the anisotropy bias averaged over nine samples was within 1 ± 0.1, and its standard deviation was below 25%; the anisotropy bias is therefore likely canceled by averaging the paleointensity results from the nine samples in this study. The variation of the experimental results (~20% of the mean value) was also consistent with our calculation. We conclude that accurate paleointensity information can be derived from the mean paleointensity of an assembly of single plagioclase crystals, while the large dispersion of the individual values is intrinsic to a randomly oriented assemblage of anisotropic grains.

Fig. 13. Anisotropy bias calculated from the typical anisotropy tensor of the plagioclase samples as a function of the angle between the two remanence vectors (here assumed to be NRM and ARM0). An anisotropy bias larger than 1 indicates that the paleointensity is overestimated, and vice versa.

Based on anisotropy measurements of plagioclase crystals separated from an Archean granitoid, Usui et al. (2015) demonstrated that (1) the geometric mean rather than the arithmetic mean should be used, and (2) tens of crystals would be needed to achieve reliable paleointensity estimates.
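The averaging argument can be checked with a small Monte Carlo sketch of our own, using the tensor eigenvalues quoted above. The uniform distribution of angular differences up to 25° is our assumption (the text only states the differences were below 25°); the Jelinek parameters of the same tensor are computed as a cross-check.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.array([1.56, 0.90, 0.49])   # eigenvalues of the "typical" ARM tensor

# Jelinek (1981) parameters of this tensor, cross-checking the values quoted
# later in the text (Pj = 3.18, Tj = 0.04 up to rounding of the eigenvalues):
eta = np.log(w)
Pj = float(np.exp(np.sqrt(2.0 * np.sum((eta - eta.mean()) ** 2))))
Tj = float((2 * eta[1] - eta[0] - eta[2]) / (eta[0] - eta[2]))
print(Pj, Tj)                      # ~3.19, ~0.05

def random_unit(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

n = 20000
u1 = random_unit(n)                # direction of the field giving "NRM"
# Second field direction within 25 deg of the first (uniform in angle is
# our own assumption).
phi = np.radians(rng.uniform(0.0, 25.0, size=n))
psi = rng.uniform(0.0, 2.0 * np.pi, size=n)
e1 = np.cross(u1, random_unit(n))
e1 /= np.linalg.norm(e1, axis=1, keepdims=True)
e2 = np.cross(u1, e1)
u2 = (np.cos(phi)[:, None] * u1
      + np.sin(phi)[:, None] * (np.cos(psi)[:, None] * e1
                                + np.sin(psi)[:, None] * e2))

# Remanence intensities |A u| in the principal-axis frame, A = diag(w):
bias = np.linalg.norm(w * u1, axis=1) / np.linalg.norm(w * u2, axis=1)
print(bias.mean(), bias.std())     # mean close to 1, scatter of order 10-20%
print(float(np.exp(np.log(bias).mean())))   # geometric mean, also close to 1
```

The mean bias staying close to 1 while individual ratios scatter by tens of percent is exactly the behavior invoked above: averaging several randomly oriented grains cancels the anisotropy bias, and with this moderate anisotropy the geometric and arithmetic means nearly coincide.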
In the present study the geometric mean and the arithmetic mean agree within the standard deviation, so fewer crystals are required. This difference can be attributed to the variation of the anisotropy effect: the anisotropy tensor they used for their estimates (corresponding to Pj = 6.21 and Tj = 0.34) was more anisotropic than that used in the present study (corresponding to Pj = 3.18 and Tj = 0.04). Since the shape and fabric of exsolved magnetite vary among samples, the anisotropy effect, and how to remove it, need to be studied carefully for each rock. In addition to remanence anisotropy, nonlinear TRM acquisition is a major issue in the paleomagnetism of exsolved magnetite. For the studied sample, the NRM/IRM ratio was lower than the TRM/IRM ratios of previously studied plagioclase crystals (Usui et al. 2015) and of rocks containing exsolved magnetite (Selkin et al. 2007). This implies that nonlinear TRM acquisition may be insignificant for the obtained paleointensity range (~60 μT). There is also a possibility that the NRM of exsolved magnetite is a thermochemical remanent magnetization (TCRM) rather than a TRM, since the formation temperature of exsolved magnetite in plagioclase is not clear (Feinberg et al. 2005). In that case the obtained paleointensity would give a lower limit of the field strength at that age, as TCRM acquisition is less efficient than TRM acquisition (Stacey and Banerjee 1974; Usui and Nakamura 2009).

Comparison of magnetic carriers of plagioclase and whole-rock samples

Wakabayashi et al. (2006) and Tsunakawa et al. (2009) predicted that most of the stable remanence of the Iritono granite is carried by magnetite inclusions in plagioclase. However, detailed rock-magnetic experiments on the plagioclase grains, compared with the previously reported whole-rock studies, reveal that the distributions of blocking temperatures and grain sizes differ between the plagioclase crystals and the whole rock.
The pTRM distributions of the plagioclase samples show no concentration in any particular temperature interval below 550 °C, while about 10% of the whole-rock TRM is carried by a low-blocking-temperature (300–500 °C) component. Because the Iritono granite contains magnetite and pyrrhotite ("Rock-magnetic properties of zircon, Rock-magnetic properties of quartz, Rock-magnetic properties of plagioclase" sections; Wakabayashi et al. 2006; Tsunakawa et al. 2009), the low-blocking-temperature component above 350 °C found in the whole-rock TRM can be attributed to coarse-grained PSD and MD magnetite. Hysteresis loop measurements also indicate that the magnetite in plagioclase has a narrower range of grain sizes than in the whole rock (Fig. 6d). In addition, the whole-rock results exhibit a bimodal distribution between different paleointensity methods, which suggests the influence of non-ideal magnetic minerals and of alteration of such minerals that could not be detected or suppressed completely. The plagioclase samples, by contrast, mostly contain nearly pure, fine-grained magnetite as the magnetic carrier. Thus, although the magnetic carriers of the plagioclase crystals and the whole rock differ in their distributions of grain size and blocking temperature, the estimated paleointensities are consistent between them, and we conclude that a reliable paleointensity was obtained. Because of the more 'ideal' magnetic carrier, paleointensity experiments on single plagioclase crystals with exsolved magnetite inclusions can potentially give more reliable and informative results than conventional whole-rock experiments.
Effect of cooling rate on paleointensity

The extremely slow cooling of granitic rocks compared with laboratory timescales may require a correction to the paleointensity estimate, because the acquisition of TRM depends on time and this dependence varies with the size and aspect ratio of the magnetic grains (Halgedahl et al. 1980; Selkin et al. 2000; Yu 2011). The magnetic hysteresis results of the plagioclase grains plot in the PSD region of the Day plot (Fig. 6d), which can be interpreted as a mixture of grain sizes and aspect ratios. The stable remanence involved in the paleointensity measurements is carried by SD to PSD magnetite. Based on SD theory (Halgedahl et al. 1980; Selkin et al. 2000) and the estimated cooling time of the Iritono granite body, Tsunakawa et al. (2009) argued that the ratio of TRM acquired in nature to TRM acquired in the laboratory would be 1.5 for the SD components, whereas PSD grains show an insignificant cooling-rate dependence of TRM acquisition (Yu 2011). The cooling-rate-corrected paleointensity of 38.2 ± 7.9 μT, obtained assuming SD magnetite, therefore gives a lower limit, since PSD magnetite would give a higher value. The corresponding cooling-rate-corrected VDM of 8.9 ± 1.8 × 10²² Am², calculated from the paleointensity of the plagioclase crystals and the inclination of the H component of the whole rock (Wakabayashi et al. 2006), thus constrains the lower limit of the paleointensity at the age of 115 Ma.

Significance of Shaw-type paleointensity methods on single crystals

This is the first report applying the Tsunakawa–Shaw paleointensity method to single-grain samples. Considering that several results were rejected because of severe alteration during laboratory heating, a Shaw-type method, in which the number of laboratory heatings is minimized, seems more appropriate than a Thellier-type method. Furthermore, the ThD curves of the plagioclase crystals (Fig. 6f) show a very narrow distribution of blocking temperatures just below the Curie temperature of magnetite (530–580 °C), while the AFD curve (top-right diagram in Fig. 8) shows a broad distribution of coercivity (50–150 mT). This emphasizes the advantage of estimating paleointensity not in blocking-temperature space (by a Thellier-type method) but in coercivity space (by a Shaw-type method), especially for magnetically weak samples such as single crystals. The Tsunakawa–Shaw method may thus be more suitable than the Thellier–Thellier method for plagioclase samples containing exsolved magnetite, though the two methods should be compared in future paleointensity studies using appropriate samples.

Paleointensity during the middle CNS

Considering the possible TCRM origin of the NRM and the contribution of PSD grains to the cooling-rate correction, the VDM of 8.9 ± 1.8 × 10²² Am² gives a lower limit of the time-averaged field strength during the middle of the CNS. Average field strengths for periods of frequent reversals have been estimated as the VDM over the past 5 million years from the Society Islands volcanic rocks (3.6 × 10²² Am²; Yamamoto and Tsunakawa 2005) and as the virtual axial dipole moment (VADM) for 0–160 Ma excluding the CNS from submarine basaltic glass samples (4.8 × 10²² Am²; Tauxe 2006). The present result suggests that the time-averaged field strength during the middle CNS was about twice that of non-superchron periods or more, supporting the prediction by dynamo models and simulations (e.g. Larson and Olson 1991; Glatzmaier et al. 1999; Kutzner and Christensen 2002; Christensen and Aubert 2006; Olson and Christensen 2006; Courtillot and Olson 2007; Takahashi et al. 2008; Olson et al. 2010).
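The quoted numbers can be cross-checked with the standard virtual dipole moment formula, VDM = 4πr³B / (μ₀√(1 + 3 sin²λ)) with tan I = 2 tan λ. The inclination of ≈30° used here is an illustrative assumption of our own, chosen to be consistent with the quoted VDM; it is not a value stated in this excerpt.

```python
import math

mu0 = 4.0e-7 * math.pi   # vacuum permeability (T m/A)
r_e = 6.371e6            # Earth's radius (m)

def vdm(b_tesla, incl_deg):
    """Virtual dipole moment: VDM = 4*pi*r^3*B / (mu0*sqrt(1+3*sin^2(lam))),
    with the magnetic latitude lam obtained from tan(I) = 2*tan(lam)."""
    lam = math.atan(math.tan(math.radians(incl_deg)) / 2.0)
    return (4.0 * math.pi * r_e**3 * b_tesla
            / (mu0 * math.sqrt(1.0 + 3.0 * math.sin(lam) ** 2)))

b_corr = 57.4e-6 / 1.5   # cooling-rate correction for SD grains -> ~38.3 uT
m = vdm(b_corr, 30.0)    # inclination ~30 deg (our illustrative assumption)
print(b_corr * 1e6, m)   # ~38.3 uT, ~8.9e22 Am^2

# Ratios to the non-superchron reference values quoted in the text:
print(m / 3.6e22, m / 4.8e22)   # ~2.5, ~1.9
```

The ratios in the last line are what underlie the statement that the middle-CNS field was roughly twice (or more, given that the VDM is a lower limit) that of non-superchron periods.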
By applying the present paleointensity method to various granitic rocks of different ages, we may improve our understanding of the long-term behavior of the geomagnetic field in relation to mantle convection, without the complications of non-ideal magnetic minerals that often compromise such work. We have evaluated the utility of single silicate crystals separated from granitic rocks for exploring the long-term evolution of the intensity of the geomagnetic field. We studied the rock-magnetic properties of zircon, quartz and plagioclase separated from the Iritono granite, whose paleointensity was already well constrained by past studies using whole-rock samples. Among our samples, plagioclase proved the most suitable mineral phase, carrying a more reliable and stable magnetization than zircon or quartz. We conducted paleointensity experiments on 17 plagioclase grains using the Tsunakawa–Shaw method; nine were successful and gave a mean paleointensity of 57.4 ± 11.8 μT. This value is consistent with the previously reported whole-rock paleointensity, suggesting that an assembly of single plagioclase crystals separated from a granitic rock can yield accurate paleointensity data. Considering the unknown formation temperature of exsolved magnetite and the cooling-rate effect on TRM acquisition, the time-averaged VDM is estimated to be higher than 8.9 ± 1.8 × 10²² Am² at the age of 115 Ma, suggesting a high dipole strength during the middle of the CNS. Biggin AJ, Strik GH, Langereis CG (2009) The intensity of the geomagnetic field in the late-Archaean: new measurements and an analysis of the updated IAGA palaeointensity database. Earth Planets Space 61(1):9–22. https://doi.org/10.1186/BF03352881 Biggin AJ, Steinberger B, Aubert J, Suttie N, Holme R, Torsvik TH, van der Meer DG, Van Hinsbergen DJJ (2012) Possible links between long-term geomagnetic variations and whole-mantle convection processes.
Nat Geosci 5(8):526–533. https://doi.org/10.1038/ngeo1521 Carter-Stiglitz B, Moskowitz B, Jackson M (2001) Unmixing magnetic assemblages and the magnetic behavior of bimodal mixtures. J Geophys Res Solid Earth 106(B11):26397–26411. https://doi.org/10.1029/2001JB000417 Carter-Stiglitz B, Jackson M, Moskowitz B (2002) Low-temperature remanence in stable single domain magnetite. Geophys Res Lett 29(7):33-1. https://doi.org/10.1029/2001GL014197 Channell JET, McCabe C (1994) Comparison of magnetic hysteresis parameters of unremagnetized and remagnetized limestones. J Geophys Res Solid Earth 99(B3):4613–4623. https://doi.org/10.1029/93JB02578 Christensen UR, Aubert J (2006) Scaling properties of convection driven dynamos in rotating spherical shells and application to planetary magnetic fields. Geophys J Int 166:97–114. https://doi.org/10.1111/j.1365-246X.2006.03009.x Coe RS (1967) Determination of paleo-intensities of the Earth's magnetic field with emphasis on mechanisms which could cause non-ideal behaviour in Thellier's method. J Geomagn Geoelectr 19:157–179 Cottrell RD, Tarduno JA (1999) Geomagnetic paleointensity derived from single plagioclase crystals. Earth Planet Sci Lett 169(1):1–5. https://doi.org/10.1016/S0012-821X(99)00068-0 Cottrell RD, Tarduno JA, Roberts J (2008) The Kiaman Reversed Polarity Superchron at Kiama: toward a field strength estimate based on single silicate crystals. Phys Earth Planet Inter 169(1):49–58. https://doi.org/10.1016/j.pepi.2008.07.041 Courtillot V, Olson P (2007) Mantle plumes link magnetic superchrons to Phanerozoic mass depletion events. Earth Planet Sci Lett 260(3):495–504. https://doi.org/10.1016/j.epsl.2007.06.003 Day R, Fuller M, Schmidt VA (1977) Hysteresis properties of titanomagnetites: grain-size and compositional dependence. Phys Earth Planet Int 13(4):260–267. https://doi.org/10.1016/0031-9201(77)90108-X Dunlop DJ, Özdemir Ö (1997) Rock magnetism—fundamentals and frontiers. 
Cambridge University Press, Cambridge Feinberg JM, Scott GR, Renne PR, Wenk HR (2005) Exsolved magnetite inclusions in silicates: features determining their remanence behavior. Geology 33(6):513–516. https://doi.org/10.1130/G21290.1 Fu RR, Weiss BP, Lima EA, Kehayias P, Araujo JFDF, Glenn DR, Gelb J, Einsle JF, Bauer AM, Harrison RJ, Ali GAH, Walsworth RL (2017) Evaluating the paleomagnetic potential of single zircon crystals using the Bishop Tuff. Earth Planet Sci Lett 458:1–13. https://doi.org/10.1016/j.epsl.2016.09.038 Glatzmaier GA, Coe RS, Hongre L, Roberts PH (1999) The role of the Earth's mantle in controlling the frequency of geomagnetic reversals. Nature 401(6756):885–890. https://doi.org/10.1038/44776 Halgedahl SL, Day R, Fuller M (1980) The effect of cooling rate on the intensity of weak-field TRM in single-domain magnetite. J Geophys Res Solid Earth 85(B7):3690–3698. https://doi.org/10.1029/JB085iB07p03690 Heider F, Dunlop DJ, Soffel HC (1992) Low-temperature and alternating field demagnetization of saturation remanence and thermoremanence in magnetite grains (0.037 μm to 5 mm). J Geophys Res Solid Earth 97(B6):9371–9381. https://doi.org/10.1029/91jb03097 Jelinek V (1981) Characterization of the magnetic fabric of rocks. Tectonophysics 79(3–4):T63–T67. https://doi.org/10.1016/0040-1951(81)90110-4 Kosterov A (2003) Low-temperature magnetization and AC susceptibility of magnetite: effect of thermomagnetic history. Geophys J Int 154(1):58–71. https://doi.org/10.1046/j.1365-246X.2003.01938.x Kutzner C, Christensen U (2002) From stable dipolar towards reversing numerical dynamos. Phys Earth Planet Int 121:29–45. https://doi.org/10.1016/S0031-9201(02)00016-X Larson RL, Olson P (1991) Mantle plumes control magnetic reversal frequency. Earth Planet Sci Lett 107(3–4):437–447.
https://doi.org/10.1016/0012-821X(91)90091-U Mochizuki N, Tsunakawa H, Oishi Y, Wakai S, Wakabayashi KI, Yamamoto Y (2004) Palaeointensity study of the Oshima 1986 lava in Japan: implications for the reliability of the Thellier and LTD-DHT Shaw methods. Phys Earth Planet Inter 146(3):395–416. https://doi.org/10.1016/j.pepi.2004.02.007 Moskowitz BM, Frankel RB, Bazylinski DA (1993) Rock magnetic criteria for the detection of biogenic magnetite. Earth Planet Sci Lett 120(3–4):283–300. https://doi.org/10.1016/0012-821X(93)90245-5 Moskowitz BM, Jackson M, Kissel C (1998) Low-temperature magnetic behavior of titanomagnetites. Earth Planet Sci Lett 157:141–149. https://doi.org/10.1016/S0012-821X(98)00033-8 Muxworthy AR, Evans ME (2012) Micromagnetics and magnetomineralogy of ultrafine magnetite inclusions in the Modipe Gabbro. Geochem Geophys Geosyst 14(4):921–928. https://doi.org/10.1029/2012GC004445 Olson P, Christensen UR (2006) Dipole moment scaling for convection-driven planetary dynamos. Earth Planet Sci Lett 250:561–571. https://doi.org/10.1016/j.epsl.2006.08.008 Olson PL, Coe RS, Driscoll PE, Glatzmaier GA, Roberts PH (2010) Geodynamo reversal frequency and heterogeneous core–mantle boundary heat flow. Phys Earth Planet Int 180(1–2):66–79. https://doi.org/10.1016/j.pepi.2010.02.010 Özdemir Ö, Dunlop DJ, Moskowitz BM (1993) The effect of oxidation on the Verwey transition in magnetite. Geophys Res Lett 20(16):1671–1674. https://doi.org/10.1029/93GL01483 Parry LG (1982) Magnetization of immobilized particle dispersions with two distinct particle sizes. Phys Earth Planet Int 28(3):230–241. https://doi.org/10.1016/0031-9201(82)90004-8 Paterson GA (2013) The effects of anisotropic and non-linear thermoremanent magnetizations on Thellier-type paleointensity data. Geophys J Int 193(2):694–710. 
https://doi.org/10.1093/gji/ggt033 Sato M, Yamamoto S, Yamamoto Y, Okada Y, Ohno M, Tsunakawa H, Maruyama S (2015) Rock-magnetic properties of single zircon crystals sampled from the Tanzawa tonalitic pluton, central Japan. Earth Planets Space 67(1):150. https://doi.org/10.1186/s40623-015-0317-9 Selkin PA, Gee JS, Tauxe L, Meurer WP, Newell AJ (2000) The effect of remanence anisotropy on paleointensity estimates: a case study from the Archean Stillwater Complex. Earth Planet Sci Lett 183(3):403–416. https://doi.org/10.1016/S0012-821X(00)00292-2 Selkin PA, Gee JS, Tauxe L (2007) Nonlinear thermoremanence acquisition and implications for paleointensity data. Earth Planet Sci Lett 256(1):81–89. https://doi.org/10.1016/j.epsl.2007.01.017 Selkin PA, Gee JS, Meurer WP, Hemming SR (2008) Paleointensity record from the 2.7 Ga Stillwater Complex, Montana. Geochem Geophys Geosyst. https://doi.org/10.1029/2008gc001950 Shcherbakova VV, Bakhmutov VG, Shcherbakov VP, Zhidkov GV, Shpyra VV (2012) Palaeointensity and palaeomagnetic study of Cretaceous and Palaeocene rocks from Western Antarctica. Geophys J Int 189(1):204–228. https://doi.org/10.1111/j.1365-246X.2012.05357.x Stacey FD, Banerjee SK (1974) The physical principles of rock magnetism. Elsevier, New York Takahashi F, Tsunakawa H, Matsushima M, Mochizuki N, Honkura Y (2008) Effects of thermally heterogeneous structure in the lowermost mantle on the geomagnetic field strength. Earth Planet Sci Lett 272(3):738–746. https://doi.org/10.1016/j.epsl.2008.06.017 Tanaka H, Kono M (2002) Paleointensities from a Cretaceous basalt platform in Inner Mongolia, northeastern China. Phys Earth Planet Int 133(1):147–157. https://doi.org/10.1016/S0031-9201(02)00091-2 Tarduno JA, Cottrell RD, Smirnov AV (2001) High geomagnetic intensity during the mid-Cretaceous from Thellier analyses of single plagioclase crystals. Science 291(5509):1779–1783. 
https://doi.org/10.1126/science.1057519 Tarduno JA, Cottrell RD, Smirnov AV (2002) The Cretaceous superchron geodynamo: observations near the tangent cylinder. Proc Natl Acad Sci 99:14020–14025. https://doi.org/10.1073/pnas.222373499 Tarduno JA, Cottrell RD, Smirnov AV (2006) The paleomagnetism of single silicate crystals: recording geomagnetic field strength during mixed polarity intervals, superchrons, and inner core growth. Rev Geophys. https://doi.org/10.1029/2005rg000189 Tarduno JA, Cottrell RD, Watkeys MK, Bauch D (2007) Geomagnetic field strength 3.2 billion years ago recorded by single silicate crystals. Nature 446(7136):657–660. https://doi.org/10.1038/nature05667 Tarduno JA, Cottrell RD, Watkeys MK, Hofmann A, Doubrovine PV, Mamajek EE, Liu D, Sibeck DG, Neukirch LP, Usui Y (2010) Geodynamo, solar wind, and magnetopause 3.4 to 3.45 billion years ago. Science 327(5970):1238–1240. https://doi.org/10.1126/science.1183445 Tarduno JA, Blackman EG, Mamajek EE (2014) Detecting the oldest geodynamo and attendant shielding from the solar wind: Implications for habitability. Phys Earth Planet Int 233:68–87. https://doi.org/10.1016/j.pepi.2014.05.007 Tarduno JA, Cottrell RD, Davis WJ, Nimmo F, Bono RK (2015) A Hadean to Paleoarchean geodynamo recorded by single zircon crystals. Science 349(6247):521–524. https://doi.org/10.1126/science.aaa9114 Tauxe L (2006) Long-term trends in paleointensity: the contribution of DSDP/ODP submarine basaltic glass collections. Phys Earth Planet Inter 156(3):223–241. https://doi.org/10.1016/j.pepi.2005.03.022 Thellier E, Thellier O (1959) Sur l'intensite du champ magnetique terrestre dans le passe historique et geologique. Ann Geophys 15:285–376 Tsunakawa H, Shaw J (1994) The Shaw method of palaeointensity determinations and its application to recent volcanic rocks. Geophys J Int 118(3):781–787.
https://doi.org/10.1111/j.1365-246X.1994.tb03999.x Tsunakawa H, Wakabayashi KI, Mochizuki N, Yamamoto Y, Ishizaka K, Hirata T, Takahashi F, Seita K (2009) Paleointensity study of the middle Cretaceous Iritono granite in northeast Japan: implication for high field intensity of the Cretaceous normal superchron. Phys Earth Planet Int 176(3):235–242. https://doi.org/10.1016/j.pepi.2009.07.001 Usui Y (2013) Paleointensity estimates from oceanic gabbros: effects of hydrothermal alteration and cooling rate. Earth Planets Space 65(9):985–996. https://doi.org/10.5047/eps.2013.03.015 Usui Y, Nakamura N (2009) Nonlinear thermoremanence corrections for Thellier paleointensity experiments on single plagioclase crystals with exsolved magnetites: a case study for the Cretaceous Normal Superchron. Earth Planets Space 61(12):1327–1337. https://doi.org/10.1186/BF03352985 Usui Y, Shibuya T, Sawaki Y, Komiya T (2015) Rock magnetism of tiny exsolved magnetite in plagioclase from a Paleoarchean granitoid in the Pilbara craton. Geochem Geophys Geosyst 16(1):112–125. https://doi.org/10.1002/2014GC005508 Wakabayashi KI, Tsunakawa H, Mochizuki N, Yamamoto Y, Takigami Y (2006) Paleomagnetism of the middle Cretaceous Iritono granite in the Abukuma region, northeast Japan. Tectonophysics 421(1):161–171. https://doi.org/10.1016/j.tecto.2006.04.013 Weiss BP, Maloof AC, Tailby N, Ramezani J, Fu RR, Hanus V, Trail D, Watson EB, Harrison TM, Bowring SA, Kirschvink JL, Swanson-Hysell NL, Coe RS (2015) Pervasive remagnetization of detrital zircon host rocks in the Jack Hills, Western Australia and implications for records of the early geodynamo. Earth Planet Sci Lett 430:115–128. https://doi.org/10.1016/j.epsl.2015.07.067 Wenk HR, Chen K, Smith R (2011) Morphology and microstructure of magnetite and ilmenite inclusions in plagioclase from Adirondack anorthositic gneiss. Am Min 96(8–9):1316–1324. 
https://doi.org/10.2138/am.2011.3760 Tarduno JA, Cottrell RD (2005) Dipole strength and variation of the time-averaged reversing and nonreversing geodynamo based on Thellier analyses of single plagioclase crystals. J Geophys Res Solid Earth. https://doi.org/10.1029/2005jb003970 Yamamoto Y, Tsunakawa H (2005) Geomagnetic field intensity during the last 5 Myr: LTD-DHT Shaw palaeointensities from volcanic rocks of the Society Islands, French Polynesia. Geophys J Int 162(1):79–114. https://doi.org/10.1111/j.1365-246X.2005.02651.x Yamamoto Y, Tsunakawa H, Shibuya H (2003) Palaeointensity study of the Hawaiian 1960 lava: implications for possible causes of erroneously high intensities. Geophys J Int 153(1):263–276. https://doi.org/10.1046/j.1365-246X.2003.01909.x Yamamoto Y, Torii M, Natsuhara N (2015) Archeointensity study on baked clay samples taken from the reconstructed ancient kiln: implication for validity of the Tsunakawa–Shaw paleointensity method. Earth Planets Space 67(1):63. https://doi.org/10.1186/s40623-015-0229-8 Yu Y (2010) Paleointensity determination using anhysteretic remanence and saturation isothermal remanence. Geochem Geophys Geosyst. https://doi.org/10.1029/2009gc002804 Yu Y (2011) Importance of cooling rate dependence of thermoremanence in paleointensity determination. J Geophys Res Solid Earth. https://doi.org/10.1029/2011jb008388 Yu Y, Tauxe L, Genevey A (2004) Toward an optimal geomagnetic field intensity determination technique. Geochem Geophys Geosyst. https://doi.org/10.1029/2003gc000630 Zhang N, Zhong S (2011) Heat fluxes at the Earth's surface and core–mantle boundary since Pangea formation and their implications for the geomagnetic superchrons. Earth Planet Sci Lett 306(3–4):205–216. https://doi.org/10.1016/j.epsl.2011.04.001 YY and HT collected the samples. CK conducted the magnetic measurements. All contributed to discussion and writing the manuscript. All authors read and approved the final manuscript.
We thank Shinji Yamamoto for petrological discussions. The microscopic photograph of plagioclase sample (Fig. 7) was taken by Yujiro Tamura. We thank lead guest editor John Tarduno and two anonymous reviewers for their constructive comments. Rock- and paleomagnetic measurements were taken under the cooperative research program of Center for Advanced Marine Core Research (CMCR), Kochi University (Accept Nos. 16A009, 16B009, 17A028 and 17B028). This work was supported by the Japan Society for the Promotion of Science (JSPS) Research Fellowship for Young Scientists (DC1) No. 15J11812. The data and materials used in this study are available on request to the corresponding author, Chie Kato ([email protected]). Department of Environmental Changes, Faculty of Social and Cultural Studies, Kyushu University, Fukuoka, Japan Chie Kato Department of Earth and Planetary Sciences, Tokyo Institute of Technology, Tokyo, Japan Chie Kato & Hideo Tsunakawa Department of Earth and Planetary Science, University of Tokyo, Tokyo, Japan Masahiko Sato Center for Advanced Marine Core Research, Kochi University, Kochi, Japan Yuhji Yamamoto Division of Geological and Planetary Sciences, California Institute of Technology, Pasadena, USA Joseph L. Kirschvink Earth-Life Science Institute, Tokyo Institute of Technology, Tokyo, Japan Hideo Tsunakawa Correspondence to Chie Kato. Kato, C., Sato, M., Yamamoto, Y. et al. Paleomagnetic studies on single crystals separated from the middle Cretaceous Iritono granite. Earth Planets Space 70, 176 (2018). https://doi.org/10.1186/s40623-018-0945-y Received: 03 July 2018
Removable singularity at 0 if the image of the punctured unit disc has finite area

The following is taken from an old complex analysis qualifying exam.

Let $\Delta$ denote the open unit disc. Suppose $f:\Delta\setminus\{0\}\rightarrow \mathbb{C}$ is holomorphic and assume that $$\int_{0<|x+iy|<1}|f(x+iy)|^2\,dx\,dy<\infty.$$ Prove that $f$ can be extended uniquely to a holomorphic function on $\Delta$.

I would like to show that $|f(z)|$ is bounded in a neighborhood of 0, and then use Riemann's removable singularity theorem... but this is giving me trouble. I can use Cauchy's integral formula on $f^2$ to obtain $$|f(z)|^2\leq\frac{1}{\pi R^2}\int_0^{2\pi}\int_0^R|f(z+re^{i\theta})|^2r\,drd\theta,$$ where $R<|z|$. This double integral is no greater than the given integral, which is finite. However, the $1/R^2$ factor prevents me from concluding anything about boundedness near 0.

complex-analysis — John Adamski

Note that the hypothesis you have is not that "the image has finite area", but "the squared modulus of the function value has a finite average". It is (at least a priori, and perhaps only for non-analytic functions) possible to satisfy the finite-average condition but not the finite-area one from your question title. – hmakholm left over Monica, Apr 24 '12

I would like to expand it in a Laurent series and then calculate that quantity explicitly in terms of the coefficients. It is simple, à la the norm of a Fourier series. – mike, Apr 24 '12

Thanks. Plugging in the Laurent series was definitely the right way to begin. – John Adamski, Apr 26 '12

Let us represent $f$ as a Laurent series inside the punctured unit disc using polar coordinates.
$$f(re^{i\theta})=\sum_{n=-\infty}^\infty a_n(re^{i\theta})^n,\quad 0<r<1$$ Applying the given condition, we have $$\infty>\int_0^1\int_0^{2\pi}\left(\sum_{n=-\infty}^\infty a_n(re^{i\theta})^n\right)\overline{\left(\sum_{m=-\infty}^\infty a_m(re^{i\theta})^m\right)}r\,d\theta dr$$ $$=\sum_{n=-\infty}^\infty\sum_{m=-\infty}^\infty a_n\overline{a_m}\int_0^1 r^{n+m+1}\left(\int_0^{2\pi} e^{i(n-m)\theta}\,d\theta\right) dr.$$ Note that the integral over $\theta$ is $2\pi$ when $n=m$ and 0 otherwise. Thus, we have $$\infty>2\pi\sum_{n=-\infty}^\infty|a_n|^2\int_0^1 r^{2n+1}\,dr.$$ Now, since the integral over $r$ is infinite for $n\leq-1$, we must conclude that $a_n=0$ for $n\leq-1$. In other words, $f$ has neither a pole nor an essential singularity at 0. Thus, $f$ has a removable singularity at 0, i.e. $f$ can be extended to a holomorphic function on $\Delta$.
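As a numerical sanity check of the key step (illustration only, not part of the proof): the radial integrals $2\pi\int_\varepsilon^1 r^{2n+1}\,dr$ converge as $\varepsilon\to 0$ for $n\ge 0$ but diverge logarithmically for $n=-1$.

```python
import math

def stripe_integral(n, eps, steps=20000):
    """Numerically evaluate 2*pi * integral from eps to 1 of r^(2n+1) dr,
    using the trapezoid rule on a log-spaced grid (which resolves r -> 0)."""
    ratio = (1.0 / eps) ** (1.0 / steps)
    total, r = 0.0, eps
    for _ in range(steps):
        r_next = r * ratio
        total += 0.5 * (r ** (2 * n + 1) + r_next ** (2 * n + 1)) * (r_next - r)
        r = r_next
    return 2 * math.pi * total

for n in (1, 0, -1):
    print(n, stripe_integral(n, 1e-3), stripe_integral(n, 1e-6))
# For n >= 0 the two values agree with the finite limit 2*pi/(2n+2);
# for n = -1 the integral grows like 2*pi*log(1/eps) as eps shrinks.
```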
Alternative zebra-structure models in solar radio emission by Chernov G.P.

There is an ongoing discussion in the literature about the nature of the zebra structure (ZS) in type IV radio bursts (Chernov, 2011). We have shown the possibility of a model with whistler waves to explain many fine components of the ZS stripes, taking into account the effects of scattering of the whistlers on fast particles. Here we show some complex examples of observed ZS bursts in different wave ranges which are difficult to interpret within the framework of the Double Plasma Resonance (DPR) mechanism, and then pass directly to the theoretical problems.

Figure 1. This dynamic spectrum shows the evolution of zebra-type bursts during 46 s, observed on 2004 December 1 with the Huairou radio station (Beijing). Initial fiber-type bursts transform into zebra patterns, as well as into decimetric millisecond spikes (from Chernov et al. 2017, Geo. and Aer. 57, 738).

In the DPR model all changes of the ZS stripes are usually associated with changes of the magnetic field and with propagating fast magneto-acoustic waves. For one very complicated case, the simultaneous presence in the radio source of fast particles with several different distribution functions was proposed (Zheleznyakov et al. 2016). However, the DPR conditions cannot change up to ten times per second, and complex zebra behavior such as that in Figure 1 cannot be explained by such changes (especially when the zebra is observed in the pulsating regime). The basic condition of the instability — the existence of DPR levels when the magnetic-field scale height $L_B$ is much smaller than the plasma-density scale height $L_N$ — seems obvious: at the DPR levels the upper hybrid frequency \(\omega_{UH}\) becomes equal to an integer harmonic $s$ of the electron cyclotron frequency, $\omega_{UH} = (\omega_{Pe}^2 + \omega_{Be}^2)^{1/2} = s\omega_{Be}$ (Zheleznyakov, Zlotnik, 1975).
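The DPR condition can be located numerically for model profiles. Combining $\omega_{UH}=s\omega_{Be}$ with $\omega_{UH}^2=\omega_{Pe}^2+\omega_{Be}^2$ gives $\omega_{Pe}/\omega_{Be}=\sqrt{s^2-1}$ at the $s$-th level. The exponential profiles and numerical values below are hypothetical, chosen only for illustration.

```python
import math

# Hypothetical exponential coronal profiles (illustrative values only):
# density scale L_N much larger than magnetic-field scale L_B.
f_pe0, L_N = 898e6, 5.0e7   # plasma frequency at h = 0 (Hz), density scale (m)
f_ce0, L_B = 280e6, 1.0e7   # cyclotron frequency at h = 0 (Hz), field scale (m)

def f_pe(h): return f_pe0 * math.exp(-h / (2.0 * L_N))   # f_pe ~ sqrt(n)
def f_ce(h): return f_ce0 * math.exp(-h / L_B)

def dpr_height(s):
    """Height of the s-th DPR level, where f_pe/f_ce = sqrt(s^2 - 1)
    (equivalent to f_UH = s * f_ce).  For exponential profiles the ratio
    f_pe/f_ce grows with h when L_B < 2*L_N, so each level is unique."""
    target = math.sqrt(s * s - 1.0)
    slope = 1.0 / L_B - 1.0 / (2.0 * L_N)   # d ln(f_pe/f_ce) / dh
    return math.log(target / (f_pe0 / f_ce0)) / slope

for s in range(4, 9):   # a few DPR levels above the reference height
    h = dpr_height(s)
    f_uh = math.hypot(f_pe(h), f_ce(h))
    print(s, round(h / 1e6, 2), round(f_uh / 1e6, 1))   # s, h (Mm), f_UH (MHz)
```

Each harmonic maps to a distinct height and emission frequency — a set of zebra stripes — which also makes plain why rapid (sub-second) stripe changes are hard to reconcile with this geometry: they would require the profiles themselves to change that fast.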
However, the use of hypothetical schemes of the dependence of the cyclotron harmonic frequencies $s\omega_{Be}$ and the plasma frequency $\omega_{Pe}$ on the coordinate x, without numerical scales on the axes, immediately raised questions, both in the first papers and in many subsequent ones, including the recent review by Zheleznyakov et al. (2016) (see e.g. Figure 3 of that review). Karlický and Yasnov (2015) then showed that microwave zebras are preferentially generated in regions with steep gradients of the plasma density, for example in the transition region, using the density model of Selhorst et al. (2008). The narrow peak of the growth rate of plasma waves at the upper hybrid frequency was obtained for zero velocity dispersion; at finite dispersion it is lost as an infinitesimal quantity, and the DPR resonance condition is strictly valid only in the zero-temperature limit. The number of works devoted to the improvement of the DPR mechanism continues to grow. A major contribution to this improvement was made by Karlický and co-authors. Benáček, Karlický and Yasnov (2017) analyzed the effects of the temperatures of the background plasma and of the hot electrons on the zebra generation processes. They showed that for a relatively low temperature of the hot electrons (thermal velocity $v_t = 0.1c$) the dependence of the growth rate on the ratio of the electron plasma and electron cyclotron frequencies exhibits distinct peaks (for the first three harmonics), and with increasing temperature ($v_t = 0.2c$) these peaks are smoothed out. Moreover, as seen in their Fig. 4, the relative bandwidth of the growth-rate maxima also increases with the harmonic number s. Such behavior differs crucially from the qualitative estimations shown previously (Zheleznyakov et al. 2016).
At the same time, in the whistler model all the aforementioned properties of ZS stripes were explained by real physical processes occurring during the coalescence of Langmuir waves (l) with whistler waves (w): $l + w \Rightarrow t$ (Chernov, 2006). First of all, a radio source of ZS is usually located in magnetic islands formed after CME ejections. The close connection of ZS with fiber bursts is therefore simply explained by the acceleration of fast particles in magnetic reconnection regions in the lower and upper parts of the magnetic islands. Let us recall that the wavelike or saw-tooth frequency drift of stripes was explained by the switching of the whistler instability from the normal Doppler cyclotron resonance to the anomalous one, shown qualitatively in Figure 2 for the distribution function F (Chernov, 1990). Figure 2. Switching of the whistler instability from the normal Doppler effect to the anomalous one, according to Gendrin, 1981; F – levels of the distribution function, E – levels of equal energy, D – electron diffusion directions (from Chernov, 1990). In the model with whistlers, many components of the dynamics of the ZS stripes are explained by the quasi-linear diffusion of fast particles on whistlers. The smooth switching of the predominant contribution from the anomalous to the normal Doppler resonance (and back) occurs in accordance with the sign of the operator $\Lambda$ in the growth rate expression (Gendrin, 1981; Bespalov and Trahtengerts, 1980): \[ \Lambda=\frac{s\omega_B}{\omega V_\bot} \frac{\partial}{\partial V_\bot}+\left. \frac{k_\|}{\omega} \frac{\partial}{\partial V_\|} \right| _{ V_\| =(\omega - s\omega_B)/k_\|}\] Recall that in the normal Doppler resonance the resonant particles and the whistler propagate in opposite directions, whereas in the anomalous Doppler resonance they propagate in the same direction.
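The resonance condition entering the growth-rate expression, $V_\| = (\omega - s\omega_B)/k_\|$, can be sketched numerically (my own illustration, not from the paper; the whistler parameters below are hypothetical, in arbitrary units, with $s = +1$ for the normal and $s = -1$ for the anomalous Doppler resonance):

```python
def resonance_velocity(omega, s, omega_B, k_par):
    """Parallel velocity satisfying the cyclotron resonance
    omega - k_par * V_par = s * omega_B, i.e. V_par = (omega - s*omega_B) / k_par."""
    return (omega - s * omega_B) / k_par

# Hypothetical whistler parameters (omega < omega_B), arbitrary units:
omega, omega_B, k_par = 0.3, 1.0, 1.0
v_normal = resonance_velocity(omega, +1, omega_B, k_par)     # < 0: counter-streaming particles
v_anomalous = resonance_velocity(omega, -1, omega_B, k_par)  # > 0: co-streaming particles
```

For whistlers (omega < omega_B) the two resonances select particles streaming in opposite parallel directions, which is why switching between them reverses the whistler propagation direction and the frequency drift of the stripes.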
This effect provides a smooth change of the whistler propagation direction and, consequently, a smooth change of the frequency drift of the stripes. A similar effect is called the fan instability in tokamak plasmas (Parail and Pogutse, 1981). Such switching should lead to a synchronous change of the frequency drift of the stripes and of the spatial drift of the radio source. New injections of fast particles cause sharp changes in the frequency drift of stripes in instantaneous ZS columns. Low-frequency absorptions (the black stripes of ZS) are explained by the weakening of the plasma wave instability due to the diffusion of fast particles by whistlers. The superfine structure is generated by a pulsating regime of the whistler instability with ion-sound waves (Chernov et al. 2003). Rope-like chains of fiber bursts are explained by a periodic whistler instability between two fast shock fronts in a magnetic reconnection region (Chernov, 2006). In the whistler model, zebra stripes can be converted into fiber bursts and spikes (and back), as is shown in Figure 1. This situation stimulates the search for new mechanisms. For example, we showed earlier the importance of the explosive instability, at least for large flares with ejections of protons. In the system of a weakly relativistic proton beam and a non-isothermal plasma, the slow space-charge beam mode possesses negative energy, and in the triplet of the slow and fast beam modes and an ion-acoustic wave an explosive cascade of ion-sound harmonics is excited. Electromagnetic waves in the form of ZS stripes appear as a result of the scattering of the fast protons on these harmonics (Fomichev et al., 2009). Such a mechanism can also be promising for the ZS in the radio emission of the pulsar in the Crab nebula. We have discussed the difficulties of the DPR mechanism according to the latest publications and have shown the potential of some promising alternative mechanisms.
For a comprehensive discussion and comparative analysis of observations of ZS and fiber bursts and of the different theoretical models we refer the reader to the reviews of Chernov (2012; 2016) (freely available at http://www.izmiran.ru/~gchernov/). Based on the recent paper: G.P. Chernov, Chapter IV in the book «Research advances in astronomy», Ed. N. Mehler, Nova Science Publishers, New York, 2018, pp. 119–146. ISBN: 978-1-53614-098-9. See also: https://arxiv.org/abs/1807.08818 Benáček J., Karlický M., Yasnov L.V. (2017) Temperature dependent growth rates of the upper-hybrid waves and solar radio zebra patterns. Astron. Astrophys. 598, A108. https://doi.org/10.1051/0004-6361/201629395 Chernov G.P. (1990) Whistlers in the solar corona and their relevance to fine structures of type IV radio emission. Solar Phys. 130, 75−82. Chernov G.P. (2011) Fine structure of solar radio bursts. Springer ASSL 375, Heidelberg. Fomichev V.V., Fainshtein S.M., Chernov G.P. (2009) A possible interpretation of the zebra pattern in solar radiation. Plasma Phys. Rep. 35, 1032−1035. https://doi.org/10.1134/S1063780X09120058 Zheleznyakov V.V., Zlotnik E.Ya., Zaitsev V.V., Shaposhnikov V.E. (2016) Double plasma resonance and its manifestations in radio astronomy. Physics-Uspekhi 59(10), 997−1020. https://doi.org/10.3367/UFNe.2016.05.037813
Influence of antenatal physical exercise on haemodynamics in pregnant women: a flexible randomisation approach Rhiannon Emma Carpenter1, Simon J. Emery2, Orhan Uzun3, Lindsay A. D'Silva4 and Michael J. Lewis1Email author BMC Pregnancy and Childbirth 2015, 15:186 © Carpenter et al. 2015 Normal pregnancy is associated with marked changes in haemodynamic function; however, the influence and potential benefits of antenatal physical exercise at different stages of pregnancy and postpartum remain unclear. The aim of this study was therefore to characterise the influence of regular physical exercise on haemodynamic variables at different stages of pregnancy and also in the postpartum period. Fifty healthy pregnant women were recruited and randomly assigned (2 × 2 × 2 design) to a land-based or water-based exercise group or a control group. Exercising groups attended weekly classes from the 20th week of pregnancy onwards. Haemodynamic assessments (heart rate, cardiac output, stroke volume, total peripheral resistance, systolic and diastolic blood pressure and end diastolic index) were performed using the Task Force haemodynamic monitor at 12–16, 26–28 and 34–36 weeks of pregnancy and at 12 weeks following birth, during a protocol including postural manoeuvres (supine and standing) and light exercise. In response to an acute bout of exercise in the postpartum period, stroke volume and end diastolic index were greater in the exercise group than in the non-exercising control group (p = 0.041 and p = 0.028 respectively). Total peripheral resistance and diastolic blood pressure were also lower (p = 0.015 and p = 0.007, respectively) in the exercise group. Diastolic blood pressure was lower in the exercise group during the second trimester (p = 0.030).
Antenatal exercise does not appear to substantially alter maternal physiology with advancing gestation; we speculate that the already vast changes in maternal physiology mask the influences of antenatal exercise. However, it does appear to result in an improvement in a woman's haemodynamic function (enhanced ventricular ejection performance and reduced blood pressure) following the end of pregnancy. ClinicalTrials.gov NCT02503995. Registered 20 July 2015. Keywords: Stroke volume, Late pregnancy, Total peripheral resistance. Changes in haemodynamic function during 'normal' pregnancy have been relatively well characterised, although differences in methodologies have led to some inconsistencies between reported findings. Healthy pregnancy is associated with marked changes in haemodynamic function, with increases in cardiac output (CO) of up to 50 % by late pregnancy [1–3]. However, the temporal patterns of change in heart rate (HR) and stroke volume (SV) that lead to this increase in CO are still being debated [4–9]. Systemic vascular resistance and diastolic blood pressure both decrease during pregnancy, reaching a nadir at around 25 weeks' gestation [2, 9] and then gradually increasing until term [7–10], whilst systolic blood pressure remains unchanged [2, 8, 11, 12]. What is far less clear is the influence of antenatal physical exercise and an individual's 'training status' on haemodynamic function in pregnancy. Exercise training in healthy non-pregnant women results in a lower resting HR due to alterations in autonomic control of the heart, together with increases in SV and resting CO and a reduction in systolic blood pressure [13]. Changes in haemodynamic response are usually seen in healthy individuals within 3 to 12 weeks of starting an exercise training programme [13, 14] and display a dose–response relationship [15].
The Royal College of Obstetricians and Gynaecologists (RCOG) [16] currently recommend that previously sedentary women should begin with 15 min of continuous exercise three times each week, increasing gradually to 30 min four times each week, and thereafter daily. However, the specific type of exercise required to provoke a sustained change in cardiovascular function during pregnancy has not been determined. It is also highly debatable whether the RCOG guidelines are realistic in terms of likely adherence by pregnant women, and a more pragmatic approach to exercise guidance is needed. Most studies to date have assessed the acute haemodynamic response to a single bout of exercise [17–19] but have not considered the longer-term sustained changes that might result from a training programme and a change in physical fitness. Furthermore, some authors have suggested that the duration of an exercise programme initiated after conception will be too short to result in any significant haemodynamic changes above those already occurring during gestation [20, 21]. Early studies found that resting CO and SV were similar in trained and untrained women during late pregnancy, although resting HR was lower and SV was higher in trained women postpartum [22]. Similar changes were later reported [21], with no significant changes in HR, CO or SV in response to aerobic cycling exercise by late pregnancy, although the pattern of change was altered: peak values for these variables were observed at the end of the second trimester in the non-exercising control group and in the third trimester for the exercise group. These authors speculated that the additional late-pregnancy increase in CO in exercise trained women might be helpful in maintaining venous return and therefore in helping to prevent supine hypotension [21]. 
The potential benefits of altering haemodynamic function via antenatal exercise training still need to be clarified, but logically an increase in CO and changes in other haemodynamic variables could be advantageous for mother and baby. Further research is now required to more fully assess the haemodynamic changes that occur in response to a programme of antenatal physical exercise. The aim of this study was therefore to characterise the influence of regular physical exercise on haemodynamic variables at different stages of pregnancy and also in the postpartum period. Eligible participants were apparently healthy pregnant women aged 18 years or over, with no existing complications of pregnancy at their 12-week dating scan. Participants were recruited (1) through direct contact at the antenatal clinic (during the 12-week dating scan or via telephone), (2) via response to posters placed in the antenatal clinic, local GP surgeries, sports centres and antenatal exercise classes, (3) through advertisements placed on the Health Board website and in local newspapers, and (4) via emails sent to university and hospital staff. Exclusion criteria were: a history of cardiovascular or chronic respiratory problems, sleep apnoea, or central/peripheral nervous system disorder. Individuals who wanted to participate gave their written consent. Participants were informed that they were free to leave the study at any time and this would not affect their standard antenatal care. Ethical approval was obtained from the local (South West Wales) Research Ethics Committee and all procedures were conducted in accordance with the Declaration of Helsinki. Using a 2 × 2 × 2 design [23] participants were randomly assigned to one of three groups: (1) a control group, members of which did not undertake a formal exercise programme, (2) a land-based exercise group, and (3) a water-based exercise group (Fig. 1). 
Participants were asked a series of questions to determine the group to which they were to be assigned. At each stage they had the option to answer 'no' and were free to choose the group to which they preferred to belong. Flow diagram showing the principle of the 2 × 2 × 2 randomised design Exercise programmes Participants assigned to the exercise groups started their specific exercise programmes at 20 weeks' gestation and attended weekly classes until full term or until they felt they could no longer undertake physical activity. All exercise classes were led or supervised by a qualified midwife. Exercise classes on land and in the water were of similar intensities, assessed via heart rate response. This was continually monitored using heart rate monitors (Polar FT1 Heart Rate Monitor, Polar Electro, Finland; Suunto Memory Belt, Suunto, Finland) and the Borg 'rating of perceived exertion' scale [24]. Land exercise classes comprised 18 min of recumbent cycling, 10 min of stretching and toning exercises and 15 min of pelvic floor exercises. The recumbent cycling exercise (V-Fit BST-RC Recumbent Cycle, Beny Sports Co. UK Ltd., UK) consisted of a 3-min warm-up (with no resistance on the bike) followed by 15 min of continuous cycling. Exercise workload was increased by one 'level' on the bike every 2 min until the participant reached the heart rate target zones for antenatal aerobic exercise suggested by the Royal College of Obstetricians and Gynaecologists [16]. Once the target heart rate had been reached, participants were asked to continue exercising at that intensity for 10 min, followed by a cool-down period of low-resistance cycling to return heart rates to resting values. Water-based exercise classes, held weekly throughout pregnancy, consisted of a 10-min warm-up followed by 30 min of light-to-moderate intensity 'aquanatal' activities such as marching or jogging with various arm actions.
Physiological monitoring was carried out on four occasions: at 12–16, 24–26 and 34–36 weeks' gestational age, corresponding to the three trimesters of pregnancy (T1, T2, T3), and also at 12 weeks postpartum (PP). All participants were asked to perform a series of postural manoeuvres and various interventions designed to provoke changes in the cardiovascular and autonomic nervous systems. Participants were asked to refrain from drinking tea, coffee or alcohol and from eating a heavy meal within 2 h prior to assessment, and not to exercise within 24 h prior to assessment. Anthropometric data for each of the participants were gathered at the start of each measurement session. Weight (Seca digital scales, Seca Ltd., UK) and height (Holtain Stadiometer, Holtain Ltd, UK) were recorded and used to calculate body mass index (BMI). Two skinfold thickness measurements were taken (Harpenden Skinfold Calipers, British Indicators, West Sussex, UK), one on the biceps and the other on the anterior thigh, and two circumference measurements were taken at the wrist and thigh, measured to the nearest 0.1 cm with a flexible tape. These measurements were then used to calculate the change in body fat during pregnancy (Eq. 1) and the body fat mass near term (Eq. 2) [25].
$$\text{Fat change (kg)} = 0.77\,(\text{weight change, kg}) + 0.07\,(\text{change in thigh skinfold thickness, mm}) - 6.13 \tag{1}$$ $$\begin{aligned}\text{Fat mass at week 37 (kg)} ={}& 0.40\,(\text{weight at week 37, kg}) + 0.16\,(\text{biceps skinfold thickness at week 37, mm})\\ &+ 0.15\,(\text{thigh skinfold thickness at week 37, mm}) - 0.09\,(\text{wrist circumference at week 37, mm})\\ &+ 0.10\,(\text{pre-pregnancy weight, kg}) - 6.56\end{aligned} \tag{2}$$ Participants also completed a Pregnancy Physical Activity Questionnaire (PPAQ) [26] during each of the three antenatal measurement sessions to monitor changes in physical activity as pregnancy progressed. The questionnaire asked the women to record the amount of time they spent completing a number of activities, including household chores and caregiving (13 activities), work (5 activities), sport and exercise (8 activities), travelling (3 activities) and sedentary activities (3 activities) [26]. The questionnaire took approximately 10 min to complete. Experimental protocol Participants were first asked to lie in a 45° reclined-supine position for 6 min, after which they were asked to stand for the same duration. Participants then performed a light stepping exercise for 6 min, using the Nintendo Wii games console and 'balance board' platform (to provide a visual stimulus for exercise). This was followed by a 6-min seated recovery period. Participants then undertook a 3-min seated cognitive test (to provoke a sympathetic autonomic response), during which they were asked to repeatedly subtract the number 17 from a four-digit number (this was performed silently).
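The body-composition regressions (Eqs. 1 and 2) and the BMI calculation described above can be written directly as code; this is a sketch of the arithmetic only, with function and variable names of my own choosing:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height squared (m^2)."""
    return weight_kg / height_m**2

def fat_change_kg(weight_change_kg, thigh_skinfold_change_mm):
    """Eq. 1: change in body fat (kg) during pregnancy."""
    return 0.77 * weight_change_kg + 0.07 * thigh_skinfold_change_mm - 6.13

def fat_mass_week37_kg(weight37_kg, biceps_sf_mm, thigh_sf_mm,
                       wrist_circumference, prepregnancy_weight_kg):
    """Eq. 2: body fat mass (kg) near term (week 37)."""
    return (0.40 * weight37_kg
            + 0.16 * biceps_sf_mm
            + 0.15 * thigh_sf_mm
            - 0.09 * wrist_circumference
            + 0.10 * prepregnancy_weight_kg
            - 6.56)
```

For example, a 10 kg weight gain with a 5 mm increase in thigh skinfold thickness corresponds under Eq. 1 to a fat gain of just under 2 kg.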
Participants then breathed synchronously with a metronome for 3 min at a rate of 20 breaths per minute (designed to initiate a parasympathetic response) and then returned to their normal (spontaneous) breathing pattern for 3 min. The total duration of the measurement protocol was 33 min. Physiological variables quantified Participants underwent continuous Holter ECG monitoring (Pathfinder/Lifecard Digital system; Spacelabs Medical Ltd., UK), providing ECG data with a 1024 Hz sampling frequency. The ECG recordings were assessed for quality by human observation using the Pathfinder system, primarily to verify the absence of excessive noise or artefact. Beat-to-beat cardiac interval (RR) was measured automatically by the Pathfinder system (using a proprietary algorithm) and visually assessed to identify and delete any obvious artefacts (which occurred infrequently, with less than 0.1 % of beats edited in this way). The Task Force Haemodynamic monitor (CNSystems Medizintechnik GMBH, Austria) recorded stroke volume (SV), systolic and diastolic blood pressures (SBP, DBP) on a beat-to-beat basis. The Task Force monitor quantifies SV via transthoracic bioelectrical impedance measurement, in which a small electrical current (<0.4 mA, 40 kHz) is passed into the thorax. This technique has been validated under a variety of conditions against the gold standard (but invasive) thermodilution technique [27] and provides accurate and reliable results. The Task Force monitor also provides continuous non-invasive arterial blood pressure measurement via vascular unloading assessment of the blood pressure in a finger artery. This method provides uninterrupted BP measurement that compares well with intra-arterial BP recordings [28]. 
The following haemodynamic variables were also quantified from the TFM data: heart rate (HR), cardiac output (CO), total peripheral resistance (TPR), vascular compliance and stiffness, left ventricular ejection time (LVET), end diastolic index (EDI, the end diastolic volume of the left ventricle divided by the body surface area) and cardiac index (CI, the cardiac output divided by the body surface area). Normality of the data was assessed using the Kolmogorov–Smirnov test. Repeated measures ANOVA with main factors 'Pregnancy Stage' (within-group repeated measure) and 'Exercise Status' (between-group measure) was used to assess the influence of exercise participation and advancing gestation on the measured physiological variables. Mauchly's test was consulted to assess the sphericity of the data; if the assumption of sphericity was violated then Wilks' Lambda multivariate tests were used, otherwise sphericity was assumed. Post-hoc analysis was carried out with Bonferroni correction to identify the locations of significant 'difference effects' as appropriate. Independent samples t-tests were also used to assess between-group differences at each of the pregnancy stages. Statistical significance was accepted as p < 0.05. Effect sizes were quantified as partial eta squared (η²). All data are presented as Mean ± SEM (standard error of the mean) and all error bars in the figures represent SEM. Fifty women completed all four assessments, at mean gestational ages of 14.6 ± 1.8, 25.4 ± 1.4 and 34.7 ± 1.6 weeks, and at 13.4 ± 1.8 weeks postpartum. Mean body mass index (BMI) at initial assessment was 24.6 ± 0.7 kg · m−2, increasing to 28.2 ± 0.8 kg · m−2 by late pregnancy for the control group, and 26.4 ± 1.3 kg · m−2 increasing to 30.0 ± 1.5 kg · m−2 for the exercise group. BMI was not statistically different between groups at either time-point.
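The effect-size and multiple-comparison arithmetic described in the statistical methods above can be illustrated with a generic sketch (my own code, not the authors' analysis pipeline):

```python
def partial_eta_squared(ss_effect, ss_error):
    """Partial eta squared: SS_effect / (SS_effect + SS_error)."""
    return ss_effect / (ss_effect + ss_error)

def bonferroni_adjust(p_values):
    """Bonferroni-adjusted p-values: multiply each raw p-value by the
    number of comparisons, capping the result at 1.0."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Example: three hypothetical post-hoc pairwise comparisons,
# judged against alpha = 0.05 after adjustment.
adjusted = bonferroni_adjust([0.010, 0.040, 0.500])
```

Equivalently, the Bonferroni correction can be applied by comparing each raw p-value against alpha/m; the adjusted-p form used here makes the reported values directly comparable to the 0.05 threshold.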
Fat mass and the change in fat mass between T1 and T3 were not significantly different between the control and exercise groups (p = 0.389 and p = 0.543, respectively). Participant characteristics and pregnancy outcomes are displayed in Table 1 (a and b). Table 2 shows the activity levels assessed using the Pregnancy Physical Activity Questionnaire (PPAQ) during the first trimester (T1) and third trimester (T3) for the control and exercise groups. Total activity was not statistically different between the control and exercise groups at either time-point (p = 0.070 and p = 0.089, respectively, for T1 and T3). Household activity was significantly higher in the control group during T1 (p = 0.004) but not during T3 (p = 0.059). Between T1 and T3 there was a significant increase in household activity in the exercise group and a reduction in household activity in the control group (p = 0.042). Moderate-intensity exercise was significantly higher in the control group during T3 (p < 0.0005). Data from the water-based exercise class were excluded from the final statistical analysis owing to recruitment/retention of only a small number of participants in this group (n = 4). In the following, 'Exercise Group' therefore refers specifically to those participants who took part in the land-based exercise.
Table 1 Participant characteristics and pregnancy outcomes for the control (n = 34) and exercise (n = 16) groups: maternal age at initial measurement (years); BMI at initial measurement and at 34 weeks (kg · m−2); planned pregnancy; parity (nulliparous, primi-/multiparous, previous prior to pregnancy); gestational age at birth (weeks); method of delivery; prolonged rupture; low platelets; delivery time (hours:min, vaginal delivery group only, 1st and 3rd stage); birth weight (g); fat change (kg); and fat mass at 35 weeks (kg). Table 2 Activity levels (Mean ± SEM, MET-h · week−1) assessed using the Pregnancy Physical Activity Questionnaire (PPAQ) for the control and exercise groups during the first trimester (T1) and third trimester (T3), and the T3 vs. T1 change, for total, sedentary, light-, moderate- and vigorous-intensity, household, occupational and sport/exercise activity. The questionnaire was only completed by a subset of participants (Control, n = 5; Exercise, n = 14). MET = Metabolic Equivalent; MET-h = MET hours; 1 MET = 1 kcal·kg−1·h−1. Haemodynamic variables As examples of the variation in haemodynamic variables during different physical states, Table 3 shows the values of each of the haemodynamic variables during the supine posture (SUP), standing posture (STA) and supine-to-standing state change (ΔSupSta) for control and exercise groups at each of the pregnancy/postpartum stages. Figures 2, 3 and 4 show a selection of the haemodynamic variables (HR, SV, CO, TPR, SBP, DBP, EDI & CI) as functions of increasing gestation for control and exercise groups during the exercise stage (EXE), during the standing-to-exercise state change (ΔStaExe) and during the exercise-to-recovery state change (ΔExeRec).
Table 3 Haemodynamic variables (Mean ± SEM) for the control and exercise groups during (a) the supine posture, (b) the standing posture and (c) the supine-to-standing state change: HR (bpm), SV (ml), CO (L · min−1), TPR (dyn · sec · cm−5), SBP (mmHg), DBP (mmHg), EDI (ml · m−2) and CI (L · min−1 · m−2). Fig. 2 Haemodynamics for control and exercise groups during the 'exercise' state for antenatal and postpartum stages (*statistically different from PP values, † statistically different from T1 values, ‡ statistically different from T3 values; all p < 0.05): (a) HR, (b) SV, (c) TPR, (d) SBP, (e) DBP, (f) EDI, (g) CI. Pairwise differences identified from post-hoc analysis are also displayed. Fig. 3 Haemodynamic responses during advancing gestation for control and exercise groups for the standing-to-exercise state change (*statistically different from PP values, † statistically different from T1 values; all p < 0.05): (a) ΔHR, (b) ΔSV, (c) ΔCO, (d) ΔTPR, (e) ΔSBP, (f) ΔDBP, (g) ΔEDI, (h) ΔCI. Fig. 4 Haemodynamic responses during advancing gestation for control and exercise groups for the exercise-to-recovery state change (*statistically different from PP values, ‡ statistically different from T3 values; all p < 0.05): (a) ΔHR, (b) ΔSV, (c) ΔCO, (d) ΔTPR, (e) ΔSBP, (f) ΔDBP, (g) ΔEDI, (h) ΔCI. Pairwise differences identified from post-hoc analysis are also displayed. On average (across all stages of pregnancy) ANOVA showed that Exercise Status influenced only TPREXE (p = 0.016), DBPSTA (p = 0.025) and DBPEXE (p = 0.028). A significant interaction (Pregnancy Stage × Exercise Status) effect was also observed for SBPEXE (p = 0.013), DBPEXE (p = 0.005), EDISTA (p = 0.010), EDIEXE (p = 0.021) and HRSUP (p = 0.014). The results of repeated measures ANOVA assessment of the influence of Pregnancy Stage on haemodynamic variables are presented in Table 4.
Table 4 Influence of Pregnancy Stage (T1–T3, PP) on haemodynamic variables (HR, SV, CO in L·min−1, TPR in dyn·sec·cm−5, SBP, DBP, EDI in ml·m−2 and CI) in six physical states or state-changes (SUP, STA, EXE, ΔSupSta, ΔStaExe, ΔExeRec). Separate ANOVA results are shown for each haemodynamic variable during the six selected physical states or state-changes. ✓ = statistical difference between pregnancy stages; X = no statistical difference between pregnancy stages; *p < 0.0005. Pairwise differences identified from post-hoc analysis are discussed in the text. Considering the influence of Pregnancy Stage: HRSUP increased as pregnancy advanced from T2 onwards (p = 0.002, p = 0.003 respectively for T2 vs T1 and T3 vs T2). HRSUP also tended to be higher in the exercise group compared to the control group by late pregnancy (p = 0.071). HRSTA remained unchanged until T2, increasing by late pregnancy (p = 0.037). HREXE was also greater during late pregnancy in comparison to initial measurements (p = 0.002). HRSUP (p < 0.0005), HRSTA (p < 0.0005) and HREXE (p < 0.0005) were significantly lower postpartum (PP) than during any of the antenatal measurements. HRΔSupSta was significantly reduced by late pregnancy (p = 0.032), and was reduced at T2 (p = 0.003) and T3 (p < 0.0005) compared with PP. There was no influence of Pregnancy Stage on HRΔStaExe or HRΔExeRec. SVSUP remained unchanged until T2 and then decreased by late pregnancy (p < 0.0005). SVSTA was reduced PP in comparison to all antenatal measurements (p = 0.001, p < 0.0005, p = 0.005). There was no influence of gestation on SVEXE, but SVEXE was greater in the exercise group than in the control group PP (p = 0.041). SVΔSupSta reduced progressively with advancing gestation (p = 0.015, p = 0.007 for T2 and T3 respectively), and was greater PP in comparison to both T2 (p = 0.024) and T3 (p < 0.0005). SVΔStaExe and SVΔExeRec were also greater PP compared to initial measurements (p < 0.0005 and p < 0.0005, respectively).
COSUP was unchanged with gestation but was higher throughout the antenatal period compared with PP (p < 0.0005). COSTA was increased by T2 (p = 0.026) and then remained unchanged with advancing gestation. COEXE was significantly higher by T2 (p = 0.011) and T3 (p = 0.005) in comparison to initial measurements, and all antenatal measurements were greater than PP (p < 0.0005). COΔSupSta was increased during late pregnancy compared to T1 (p = 0.004) and PP measurements (p = 0.008). COΔStaExe was greater by late pregnancy and COΔExeRec was greater PP compared to T1 (p = 0.036). Exercise status did not influence CO in any of the different physical states. TPRSUP remained unchanged until T2 and then increased during late pregnancy (p = 0.0002). TPRSUP (p < 0.0005, p < 0.0005, p = 0.001 for T1, T2 and T3 respectively), TPRSTA (p < 0.0005, all pregnancy stages) and TPREXE (p = 0.002, p < 0.0005, p < 0.0005) were all lower during pregnancy than PP. TPREXE was lower in the exercise group than in the control group PP (p = 0.015). TPRΔStaExe (p = 0.036, p = 0.038, p = 0.050, respectively for T1, T2 and T3) and TPRΔExeRec (p < 0.0005, all pregnancy stages) were greater PP than during all antenatal measurements, with a trend towards a greater TPRΔExeRec response PP in the exercise group (p = 0.065). There was no influence of pregnancy stage on SBPSUP, SBPSTA, SBPΔSupSta or SBPΔStaExe. Compared with early pregnancy, SBPEXE (p = 0.022) and SBPΔExeRec (p = 0.049) were greater from T2 onwards. SBPEXE tended towards a reduction in the exercise group during T2 (p = 0.056) and PP (p = 0.063). In the exercise group, the SBPΔExeRec response changed from a reduction to an increase by PP (p = 0.012). There was no influence of pregnancy stage on DBPSUP, DBPSTA, DBPEXE, DBPΔSupSta, DBPΔStaExe and DBPΔExeRec. DBPEXE was significantly lower in the exercise group during T2 (p = 0.030) and PP (p = 0.007).
As with SBPΔExeRec, the DBPΔExeRec response changed from a reduction to an increase by PP (p = 0.008). EDISUP progressively reduced as pregnancy advanced (p = 0.024, p = 0.001 respectively for T2 vs T1 and T2 vs T3). EDISTA was unchanged until T2, after which it reduced (p = 0.006 for T3 vs T1). There was no influence of pregnancy stage on EDIEXE; however, EDIEXE was significantly increased PP in the exercise group in comparison to the control group (p = 0.028). The EDIΔSupSta response changed from a reduction to an increase during T2 and T3 when compared to T1 (p = 0.027, p < 0.0005 respectively), with a similar pattern of change observed when compared to PP (p = 0.016, p = 0.001). EDIΔStaExe and EDIΔExeRec were greater PP than during T1 (p = 0.001, p < 0.0005 respectively), and both EDIΔStaExe and EDIΔExeRec tended to be greater PP in the exercise group (p = 0.054 and p = 0.073, respectively). CISUP increased after T2 with advancing gestation (p = 0.008, p = 0.001 respectively for T3 vs T1 and T3 vs T2). CISTA remained unchanged with advancing gestation but was significantly lower PP in comparison to all antenatal measurements (p < 0.0005). CIEXE increased until T2 (p = 0.041) and then plateaued until the end of pregnancy, and it was lower PP compared to all antenatal measurements (p < 0.0005). CIΔSupSta was increased during late pregnancy (T3) compared to T1 (p = 0.001), T2 (p = 0.016) and PP (p = 0.002). CIΔStaExe was increased by late pregnancy (p = 0.022) and CIΔExeRec was greater PP compared to T1 (p = 0.001). Exercise status did not influence CI in any of the different physical states.
We found that women who had engaged in regular exercise during pregnancy displayed some additional haemodynamic changes compared with non-exercisers: (1) postpartum (PP) values of SVEXE and EDIEXE were greater in the exercise group (p = 0.041 and p = 0.028, respectively), (2) TPREXE and DBPEXE were lower in the exercise group postpartum (p = 0.015 and p = 0.007, respectively) and (3) DBPEXE was also lower in the exercise group during T2 (p = 0.030). Thus the main influence of antenatal exercise appears to be an improvement in a woman's haemodynamic function (enhanced ventricular ejection performance and reduced blood pressure) following the end of pregnancy. Irrespective of exercise status, pregnant women showed (1) unchanged supine cardiac output (CO) with advancing gestation but higher values during late pregnancy in all other physiological states, (2) increasing heart rate (HR) with advancing gestation (in supine, standing and exercise states) and lower postpartum values, and a lower HR response to standing during late pregnancy, (3) reduced supine stroke volume (SV) and reduced SV response to standing during late pregnancy, (4) increased supine vascular resistance during late pregnancy, and lower vascular resistance (supine, standing and exercise) during pregnancy compared with postpartum values, (5) reduced supine and standing end-diastolic volumes with advancing pregnancy, with an increased end-diastolic response to standing during late pregnancy and increased postpartum responses to exercise and recovery, and (6) increased cardiac index by late pregnancy in the supine posture and in response to standing. SV behaved as in previous reports, increasing until the start of the second trimester [4, 5, 29] and then either plateauing [8, 30, 31] or declining [3, 7, 9] towards the end of pregnancy. We observed a reduction in SV after T2, although the mechanism behind this change remains unclear. 
Similarly, our observations of HR increasing with pregnancy and peaking in the third trimester are consistent with other studies [2, 7–9, 31, 32]. However, there were some notable differences in the behaviour of CO between our study and previous work. Typically CO has been found to increase during the first trimester [30, 33] and to plateau by the end of the second trimester [7, 30, 33]. An increase in CO of 1 L/min by the 8th week of gestation when compared to pre-conception has been observed [4], with 57 % of the total antenatal increase in CO occurring by 24 weeks' gestation (typically CO increases by 2.5–3 L.min−1 by late pregnancy). Other authors have reported a decline in CO after the 30th week of gestation [8, 9, 32, 34]. In common with us, the majority of these authors used impedance cardiography (ICG) to characterise haemodynamic profiles. It had been claimed that this paradoxical reduction in CO during late pregnancy (physiologically CO would not be expected to decline) reflected the poor technical performance of ICG during this time (anatomical changes in the thorax, due to the enlarging gravid uterus and alterations in maternal body composition, were thought to alter the relative electrode configuration and thus directly degrade the ICG signal in pregnant women) [35]. However, as we have demonstrated here and previously [36], when measured in different physiological states (sitting, standing) a physiologically-consistent increase in CO is observed. During the postpartum period SVEXE and EDIEXE were greater whilst TPREXE and DBPEXE were reduced in women who had exercised. DBPEXE was also significantly lower in the exercise group during T2. Interestingly however, there were no between-group differences in haemodynamics when measured during the resting state. However, we did not record physical activity levels following pregnancy so we cannot assess whether this might have influenced our postpartum results. 
In non-pregnant women we would expect a 20-week exercise programme (as performed in our study) to elicit a reduction in resting HR and an increase in both resting SV and CO. However, neither Wolfe et al. (1999) [21] nor Stutzman et al. (2010) [37] found changes in resting HR in response to antenatal exercise (20-week aerobic cycling ergometry exercise programme and a 16-week antenatal walking programme, respectively). There was a suggestion of a continued increase in resting HR into late pregnancy in our exercise group but this was not statistically significant. It therefore appears that maternal HR does not respond to exercise training in the same manner as in non-pregnant women. These previous studies utilised low-to-medium intensity exercise programmes and since changes in physical fitness have a dose–response relationship [15] these may have been insufficient to provoke measurable haemodynamic and heart rate changes [21]. Also it had previously been speculated that an exercise programme beginning after conception would be of too short duration to result in significant haemodynamic change [20], whilst some authors have suggested that the normal physiological changes of pregnancy might be sufficiently dominant to negate the influence of light-to-moderate exercise training on heart rate [21]. Antenatal exercise conditioning has however previously been associated with alterations in the patterns of change in resting HR and SV with advancing gestation, with values peaking in T3 for exercising women and during T2 for controls [21]. Our study did not confirm these findings (we saw similar patterns for both groups) although differences in training protocol could account for this. In Wolfe et al.'s study, participants performed cycle ergometry exercise on 3 days each week at 75 % of age-predicted maximum for 14–25 min, and cardiovascular measurements were recorded at 17, 27 and 37 weeks gestation and postpartum (generally similar to our study). 
The present study has extended previous findings to look at the postpartum influence of antenatal exercise on maternal physiology. We observed no difference in resting haemodynamic values during the postpartum period, unlike other authors who reported a reduction in resting HR and an increase in CO in trained compared with untrained women [22]. Wolfe et al. [21] used their postpartum resting measurements only as non-pregnant reference values (for comparison with antenatal measurements) and did not directly compare exercise and control groups post-pregnancy. However, during an acute bout of exercise we observed increases in SV and EDI and reductions in TPR and DBP in the exercise group, suggesting that antenatal exercise improves exercise efficiency during the postpartum period. Interestingly, by late pregnancy the responses to an acute bout of exercise were identical to those of the control group. It would be interesting to investigate if starting an exercise programme at an earlier stage of pregnancy (prior to 12 weeks) might alter the acute response to exercise by mid-pregnancy, and if it might result in more significant haemodynamic changes above those occurring during normal gestation. In the present study, participants did not start exercising until 20 weeks gestation, a time point at which significant haemodynamic adaptation to pregnancy had already occurred. Altering the maternal haemodynamic profile at an earlier stage of pregnancy might be advantageous in reducing the risk of pregnancy-induced diseases such as gestational hypertension and pre-eclampsia. Similarly it would be of value to learn whether exercise still has an advantageous effect postpartum if women stop exercising at the start of the third trimester, or whether exercising during this stage of pregnancy is crucial for post-birth adaptations in maternal fitness. 
Altering the maternal responses to physical exercise during the postpartum period might be beneficial for mothers in reducing fatigue and improving overall well-being. In particular, from a clinical perspective enhancing the efficiency of exercise during the postpartum period might have a role in protecting women against postpartum cardiomyopathy. Although this remains speculative, such questions could be addressed with larger prospective studies. Future work might also look to examine the influence of continuing exercise during the postpartum period on maternal haemodynamics, and whether it alters the relative rate at which values return to 'normal'. We also characterised the between-state changes from standing-to-exercise and exercise-to-recovery. SBP and DBP were significantly altered in the Exercise group when changing from exercise-to-recovery in the postpartum period. Although we cannot comment on the direct significance that this might have on maternal physiology, we speculate that exercise conditioning during pregnancy alters the autonomic nervous system response to these state changes, particularly in terms of blood pressure regulation. However, these findings are based on the overall average for each physiological state (i.e. an average taken over the entire 6 min recording period). We are therefore unable to see the immediate rate of change in blood pressure response post-exercise. In future, it might be useful to look at the beat-to-beat changes in blood pressure to better characterise the dynamic influence that antenatal exercise has on blood pressure control. We observed an alteration in cardiovascular response as a result of weekly low-intensity exercise. We are aware that this exercise prescription is below current guidelines for pregnancy, which suggest that previously sedentary women should begin with 15 min of continuous exercise three times a week, increasing gradually to 30 min four times a week and then daily [16]. 
However, levels of sedentary behaviour are high (particularly in Wales), with 36 % of individuals admitting to performing no weekly exercise [38]. This suggests that a substantial proportion of pregnant women would be unlikely to engage in physical exercise. Women who do not want to commit to the recommended levels might decide not to exercise at all. If weekly exercise during pregnancy is proven to be sufficient to provide a health benefit then we would argue that more women are likely to engage in this lower level of commitment to antenatal exercise. We speculate that pregnancy might even be used as an opportunity to foster an ethos of exercise amongst previously sedentary women, which might therefore alter behaviours for the rest of their lives. Our study provides a comprehensive characterisation of haemodynamic responses utilising a randomised controlled design. The only previous study to implement a controlled experimental design to assess haemodynamic changes during pregnancy in response to antenatal exercise permitted women to choose the group to which they wished to be assigned (exercise or control), thus potentially biasing the outcomes of their study [21]. Our study design enabled a randomised controlled trial to be performed but still allowed the pregnant women the freedom to choose the intervention group to which they preferred to belong. This flexible randomisation approach therefore encouraged participation amongst women of all physical abilities and minimised the influence of bias on the outcomes of our study. 
Abbreviations

RCOG: Royal College of Obstetricians and Gynaecologists
T1: First trimester measurement point
PPAQ: Pregnancy Physical Activity Questionnaire
RR: Cardiac interval
SBP: Systolic blood pressure
DBP: Diastolic blood pressure
TPR: Total peripheral resistance
LVET: Left ventricular ejection time
EDI: End diastolic index
ΔSupSta: Supine-to-standing state changes
ΔStaExe: Standing-to-exercise state changes
ΔExeRec: Exercise-to-recovery state changes
ICG: Impedance cardiography
CI: Cardiac index

Acknowledgements: We would like to thank Afia Ali (midwife) for her help with both recruitment and running the exercise classes, and the other midwifery staff at Singleton Hospital for both their cooperation during participant recruitment and their valuable contribution to the conduct of the research.

Funding: The work carried out in this study was supported by a Welsh Government NISCHR (National Institute for Social Care and Health Research) Health Studentship (grant number: HS/10/07), and The Cooperative Pharmacy (UK) provided financial support for project consumables.

Competing interests: The authors declare that they have no competing interests. The authors alone are responsible for the content and writing of the paper.

Authors' contributions: REC and LAD were responsible for recruitment of participants and data collection. All authors were responsible for the design of the study. REC and MJL performed the statistical analysis and drafted the original manuscript. SJE and OU also provided clinical interpretation and perspective. All authors contributed to the interpretation and discussion of the manuscript. All authors read and approved the final manuscript.

Author affiliations:
College of Engineering, Swansea University, Talbot Building, Singleton Park, Swansea, SA2 8PP, UK
Department of Gynaecology, Singleton Hospital, Sketty Lane, Sketty, Swansea, SA2 8QA, UK
Department of Paediatric Cardiology, University Hospital of Wales, Heath Park, Cardiff, CF14 4XW, UK
College of Medicine, Swansea University, Talbot Building, Singleton Park, Swansea, SA2 8PP, UK

References

Gilson GJ, Samaan S, Crawford MH, Qualls CR, Curet LB. 
Changes in hemodynamics, ventricular remodeling, and ventricular contractility during normal pregnancy: a longitudinal study. Obstet Gynecol. 1997;89(6):957–62.
Mabie WC, DiSessa TG, Crocker LG, Sibai BM, Arheart KL. A longitudinal study of cardiac output in normal human pregnancy. Am J Obstet Gynecol. 1994;170(3):849–56.
Flo K, Wilsgaard T, Vårtun A, Acharya G. A longitudinal study of the relationship between maternal cardiac output measured by impedance cardiography and uterine artery blood flow in the second half of pregnancy. BJOG. 2010;117(7):837–44.
Capeless EL, Clapp JF. Cardiovascular changes in early phase of pregnancy. Am J Obstet Gynecol. 1989;161(6 Pt 1):1449–53.
Clapp JF, Capeless E. Cardiovascular function before, during, and after the first and subsequent pregnancies. Am J Cardiol. 1997;80(11):1469–73.
Mahendru AA, Everett TR, Wilkinson IB, Lees CC, McEniery CM. A longitudinal study of maternal cardiovascular function from preconception to the postpartum period. J Hypertens. 2014;32(4):849–56.
Volman MN, Rep A, Kadzinska I, Berkhof J, van Geijn HP, Heethaar RM, et al. Haemodynamic changes in the second half of pregnancy: a longitudinal, noninvasive study with thoracic electrical bioimpedance. BJOG. 2007;114(5):576–81.
Moertl MG, Ulrich D, Pickel KI, Klaritsch P, Schaffer M, Flotzinger D, et al. Changes in haemodynamic and autonomous nervous system parameters measured non-invasively throughout normal pregnancy. Eur J Obstet Gynecol Reprod Biol. 2009;144 Suppl 1:S179–83.
San-Frutos L, Engels V, Zapardiel I, Perez-Medina T, Almagro-Martinez J, Fernandez R, et al. Hemodynamic changes during pregnancy and postpartum: a prospective study using thoracic electrical bioimpedance. J Matern Fetal Neonatal Med. 
2011;24(11):1333–40.
Heiskanen N, Saarelainen H, Valtonen P, Lyyra-Laitinen T, Laitinen T, Vanninen E, et al. Blood pressure and heart rate variability analysis of orthostatic challenge in normal human pregnancies. Clin Physiol Funct Imaging. 2008;28(6):384–90.
Poppas A, Shroff SG, Korcarz CE, Hibbard JU, Berger DS, Lindheimer MD, et al. Serial assessment of the cardiovascular system in normal pregnancy. Role of arterial compliance and pulsatile arterial load. Circulation. 1997;95(10):2407–15.
Mesa A, Jessurun C, Hernandez A, Adam K, Brown D, Vaughn WK, et al. Left ventricular diastolic function in normal human pregnancy. Circulation. 1999;99(4):511–7.
Kispert CP, Nielsen DH. Normal cardiopulmonary responses to acute- and chronic-strengthening and endurance exercises. Phys Ther. 1985;65(12):1828–31.
Plowman S, Smith D. Exercise physiology for health, fitness and performance. 2nd ed. Philadelphia: Lippincott Williams & Wilkins; 2008.
American College of Sports Medicine Position Stand. The recommended quantity and quality of exercise for developing and maintaining cardiorespiratory and muscular fitness, and flexibility in healthy adults. Med Sci Sports Exerc. 1998;30(6):975–91.
Royal College of Obstetricians and Gynaecologists. Exercise in pregnancy. RCOG Statement No. 4. 2006. https://www.rcog.org.uk/globalassets/documents/guidelines/statements/statement-no-4.pdf. Accessed 19 August 2015.
Ueland K, Novy MJ, Peterson EN, Metcalfe J. Maternal cardiovascular dynamics. IV. The influence of gestational age on the maternal cardiovascular response to posture and exercise. Am J Obstet Gynecol. 1969;104(6):856–64.
Pivarnik JM, Lee W, Clark SL, Cotton DB, Spillman HT, Miller JF. 
Cardiac output responses of primigravid women during exercise determined by the direct Fick technique. Obstet Gynecol. 1990;75(6):954–9.
Sady MA, Haydon BB, Sady SP, Carpenter MW, Thompson PD, Coustan DR. Cardiovascular response to maximal cycle exercise during pregnancy and at two and seven months post partum. Am J Obstet Gynecol. 1990;162(5):1181–5.
Pivarnik JM. Cardiovascular responses to aerobic exercise during pregnancy and postpartum. Semin Perinatol. 1996;20(4):242–9.
Wolfe LA, Preston RJ, Burggraf GW, McGrath MJ. Effects of pregnancy and chronic exercise on maternal cardiac structure and function. Can J Physiol Pharmacol. 1999;77(11):909–17.
Morton MJ, Paul MS, Campos GR, Hart MV, Metcalfe J. Exercise dynamics in late gestation: effects of physical training. Am J Obstet Gynecol. 1985;152(1):91–7.
Drummond N, Abdalla M, Beattie JAG, Buckingham JK, Lindsay T, Osman LM, et al. Effectiveness of routine self monitoring of peak flow in patients with asthma. BMJ. 1994;308:559–64.
Borg GA. Perceived exertion: a note on "history" and methods. Med Sci Sports. 1973;5(2):90–3.
Paxton A, Lederman SA, Heymsfield SB, Wang J, Thornton JC, Pierson RN. Anthropometric equations for studying body fat in pregnant women. Am J Clin Nutr. 1998;67:104–10.
Chasan-Taber L, Schmidt MD, Roberts DE, Hosmer D, Markenson G, Freedson PS. Development and validation of a Pregnancy Physical Activity Questionnaire. Med Sci Sports Exerc. 2004;36(10):1750–60.
Fortin J, Haitchi G, Bojic A, Habenbacher W, Grullenberger R, Heller A, et al. Validation and verification of the Task Force® Monitor. Results of Clinical Studies for FDA. 2001;510:1–7.
Fortin J, Habenbacher W, Heller A, Hacker A, Grüllenberger R, Innerhofer J, et al. 
Non-invasive beat-to-beat cardiac output monitoring by an improved method of transthoracic bioimpedance measurement. Comput Biol Med. 2006;36(11):1185–203.
Clapp JF, Seaward BL, Sleamaker RH, Hiser J. Maternal physiologic adaptations to early human pregnancy. Am J Obstet Gynecol. 1988;159(6):1456–60.
Spätling L, Fallenstein F, Huch A, Huch R, Rooth G. The variability of cardiopulmonary adaptation to pregnancy at rest and during exercise. Br J Obstet Gynaecol. 1992;99 Suppl 8:1–40.
Clark SL, Cotton DB, Lee W, Bishop C, Hill T, Southwick J, et al. Central hemodynamic assessment of normal term pregnancy. Am J Obstet Gynecol. 1989;161(6 Pt 1):1439–42.
van Oppen AC, van der Tweel I, Alsbach GP, Heethaar RM, Bruinse HW. A longitudinal study of maternal hemodynamics during normal pregnancy. Obstet Gynecol. 1996;88(1):40–6.
Duvekot JJ, Cheriex EC, Pieters FA, Menheere PP, Peeters LH. Early pregnancy changes in hemodynamics and volume homeostasis are consecutive adjustments triggered by a primary fall in systemic vascular tone. Am J Obstet Gynecol. 1993;169(6):1382–92.
Hennessy TG, MacDonald D, Hennessy MS, Maguire M, Blake S, McCann HA, et al. Serial changes in cardiac output during normal pregnancy: a Doppler ultrasound study. Eur J Obstet Gynecol Reprod Biol. 1996;70(2):117–22.
Moertl MG, Schlembach D, Papousek I, Hinghofer-Szalkay H, Weiss EM, Lang U, et al. Hemodynamic evaluation in pregnancy: limitations of impedance cardiography. Physiol Meas. 2012;33(6):1015–26.
D'Silva LA, Davies RE, Emery SJ, Lewis MJ. Influence of somatic state on cardiovascular measurements in pregnancy. Physiol Meas. 2014;35(1):15–29.
Stutzman SS, Brown CA, Hains SM, Godwin M, Smith GN, Parlow JL, et al. 
The effects of exercise conditioning in normal and overweight pregnant women on blood pressure and heart rate variability. Biol Res Nurs. 2010;12(2):137–48.
Welsh Government. Welsh health survey 2013. 2014. http://wales.gov.uk/docs/statistics/2014/140930-welsh-health-survey-2013-en.pdf. Accessed 22 2014.
CMOS - Electrical simulation methodology

The cost of CMOS image sensor pixel-based digital camera systems is being reduced through the use of smaller pixel sizes and larger fill-factors. However, pixel size reduction is acceptable only if image quality is not sacrificed. As CMOS pixel sizes continue to decrease, there is a reduction in image signal to noise as well as an increase in cross-talk between adjacent sensor pixels. These effects can be offset by careful design optimization through computer simulation which, at current pixel dimensions, requires a comprehensive solution involving both optical and electrical analysis. In this topic we discuss the trends in CMOS image sensors, the implications for simulation, and the types of results that can be simulated, and describe the full simulation methodology to achieve them.

Pixel Operation

A schematic of the 4T APS is shown in the following figure. In the figure, a cross-section of the active sensing region (the pinned photodiode p++ and the buried n-well) and the transfer gate are illustrated. The drain contact (SENSE) of the TX transistor can be electrically isolated, and is often termed the floating diffusion (FD). The APS can be reset by applying a pulse to RST, which will pull the FD to VDD, emptying the detector of charge and establishing an initial bias condition on that node. The FD is also connected to a common-drain amplifier, which is isolated from the column bus by the row select (RS) transistor. In general terms, the pixel operates by collecting photo-generated charge. The photodetector is illuminated, and charge is collected in the n-well. At the end of the exposure, a pulse is applied to the transfer gate (TX), lowering the n-well barrier, and allowing the charge to move to the sense node. This charge is converted to a voltage on that node by the intrinsic capacitance of the amplifier gate and surrounding metal (V = Q/C). 
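The charge-to-voltage conversion on the sense node (V = Q/C) is simple to sketch numerically. The snippet below is illustrative only: the floating-diffusion capacitance and the size of the charge packet are assumed values chosen to show typical orders of magnitude, not figures from this article.

```python
# Sketch of the sense-node charge-to-voltage conversion, V = Q/C.
# The 2 fF capacitance and 10,000-electron packet are assumptions.

Q_E = 1.602e-19  # electron charge [C]

def conversion_gain_uV_per_e(c_cg_farads):
    """Conversion gain CG = q / C_CG, reported in microvolts per electron."""
    return Q_E / c_cg_farads * 1e6

def sense_node_voltage(n_electrons, c_cg_farads):
    """Voltage swing on the floating diffusion for a collected charge packet."""
    return n_electrons * Q_E / c_cg_farads

c_cg = 2e-15  # assumed floating-diffusion capacitance [F]
print(f"Conversion gain: {conversion_gain_uV_per_e(c_cg):.1f} uV/e-")
print(f"Signal for 10,000 e-: {sense_node_voltage(10_000, c_cg) * 1e3:.1f} mV")
```

A smaller sense-node capacitance gives a larger conversion gain, which is why the parasitic capacitance of the amplifier gate and surrounding metal matters for pixel sensitivity.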
The voltage on the floating diffusion is read out to the column bus through the amplifier and RS gate.

Figure: A pinned-photodiode 4T APS. A cross-section of the pinned photodiode and transfer gate is shown connected to the floating diffusion (SENSE).

Dark Current

Two key figures of merit factor into the dark current calculation. The APS acts as a charge conversion device, representing the number of electrons generated as a voltage at the output of the amplifier. This ratio is termed the conversion gain and is commonly measured in [uV/e-]. The conversion gain can be expressed as a ratio of the electron charge to a capacitance CCG:

$$CG=\frac{q}{C_{CG}}$$

A second important performance metric is the dark count itself, which is the number of charges generated during an exposure period with no illumination. The dark count is often reported as the digital number (DN), which also includes the gain and non-linearity of the ADC. At the pixel level, the dark current itself will indicate the rate of charge generation, which can be translated into a count if the exposure time is known. The dark current is an important source of both fixed pattern and temporal noise in a CMOS image sensor (CIS) pixel. In the following example, a pinned-photodiode active pixel sensor (APS) is simulated to determine the dark current density.

Sources of Dark Current

The dark current will be measured when the APS is in its reset state with no applied illumination. Consequently, the charge accumulation well will be depleted, and the internal PN junctions will be reverse biased. Multiple physical processes contribute to the recombination and generation of electrical carriers (electrons and holes), including:

Trap-assisted (Shockley-Read-Hall) recombination
Auger recombination
Radiative (direct) recombination
Surface (trap-assisted) recombination

Each of these processes can be accounted for using the material models included in CHARGE. 
For more information on the material database and parameters, please see the user guide entry on the material database. Of these mechanisms, the dominant mechanisms for charge recombination and generation in silicon are the trap-assisted processes (bulk and surface), which are illustrated in the following figure.

Figure: Sources of dark current. The three dominant mechanisms that give rise to the dark current are illustrated: (a) surface generation from trap states at the Si/SiO2 interface, (b) trap-assisted thermal generation of charge in the space charge layer, and (c) the diffusion current due to thermal generation of charge in the bulk.

In the bulk case (Shockley-Read-Hall or SRH), the following formula describes the recombination rate:

$$ R_{SRH}=\frac{n p-n_{i}^{2}}{\left(n+n_{1}\right) \tau_{p}+\left(p+p_{1}\right) \tau_{n}} $$

where n and p are the electron and hole densities, respectively, n1 and p1 are related to the energetic location of the trap state (typically close to mid-gap), and τn and τp are the electron and hole carrier lifetimes, respectively. The formula for the surface recombination rate is similar, but the carrier lifetimes are replaced by an inverse surface recombination velocity. 
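The SRH expression can be evaluated directly. The sketch below uses illustrative parameter values (the lifetimes and carrier densities are assumptions, not CHARGE material data) and shows that in a depleted region, where n and p are far below ni, the rate turns negative, i.e. the trap states become a net source of carriers at roughly ni/(τn + τp).

```python
# Sketch of the bulk SRH recombination rate. Parameter values are
# illustrative assumptions, not fitted silicon model data.

def srh_rate(n, p, ni, tau_n, tau_p, n1=None, p1=None):
    """R_SRH = (n*p - ni^2) / ((n + n1)*tau_p + (p + p1)*tau_n).

    For a trap near mid-gap, n1 ~ p1 ~ ni (the assumption used in the text).
    Densities in cm^-3, lifetimes in seconds; returns cm^-3 s^-1."""
    n1 = ni if n1 is None else n1
    p1 = ni if p1 is None else p1
    return (n * p - ni**2) / ((n + n1) * tau_p + (p + p1) * tau_n)

ni = 1.0e10   # intrinsic carrier density of silicon at 300 K [cm^-3]
tau = 10e-6   # assumed electron and hole lifetimes [s]

# Depleted region (n, p << ni): the rate is negative (net generation)
# and approaches -ni / (tau_n + tau_p).
r_depleted = srh_rate(n=1e2, p=1e2, ni=ni, tau_n=tau, tau_p=tau)
print(f"Net generation rate in depletion: {-r_depleted:.3g} cm^-3 s^-1")
```

In equilibrium (n·p = ni²) the same function returns zero, which is a quick sanity check on the implementation.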
Under reverse-bias conditions, the space charge layer of the PN junction broadens and is depleted of carriers, such that

$$n, p \ll n_{i}$$

Under these conditions, and assuming that

$$n_{1}, p_{1} \approx n_{i}$$

which is true if the trap state energy is at the intrinsic energy level, the recombination rate in the space charge layer becomes:

$$ R_{SRH} \approx-\frac{n_{i}^{2}}{n_{1} \tau_{p}+p_{1} \tau_{n}} \approx-\frac{n_{i}}{\tau_{p}+\tau_{n}}=-\frac{n_{i}}{\tau_{g}} $$

and the negative sign indicates that the recombination rate has become a generation rate:

$$G=n_{i} / \tau_{g} $$

When a region of the bulk or surface is depleted by an applied voltage, it will become a source of charge, such that its current density can be described as

$$ J_{dark}=q W G_{tot} $$

where W is the width of the space charge layer (SCL) in reverse bias. Because minority carriers are swept through the SCL by the reverse-bias field, their densities at the edge of the SCL will be reduced below their equilibrium concentrations. Charges generated through the same bulk process as described above are subject to diffusion, and may move along the density gradient to the depleted edge of the SCL. The diffusion current will depend on the diffusion length: the average distance that a carrier will travel before recombining. The diffusion length in turn depends on the carrier lifetime. A simple 1D model of this process gives a diffusion current (assuming a p-type neutral region):

$$ J_{diff}=q \sqrt{\frac{D_{n}}{\tau_{n}}} \frac{n_{i}^{2}}{N_{A}} $$

where \(D_{n}\) is the diffusivity of the electrons in silicon and \(N_{A}\) is the p-type doping concentration (assuming \( N_{A} \gg N_{D} \)). Note that the diffusion current is proportional to \(n_{i}^{2}\). To minimize the dark current, three approaches can be taken: minimize the depleted area, minimize the width of the space charge layer, or improve the material quality by maximizing the carrier lifetime and minimizing the surface recombination velocity. 
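The two current contributions above can be compared numerically. All parameter values in this sketch (SCL width, generation lifetime, diffusivity, doping) are assumptions chosen for illustration, not results from the simulation.

```python
# Rough numeric comparison of the two dark-current contributions:
# space-charge-layer generation (J_dark = q*W*G) and bulk diffusion
# (J_diff). All parameter values are illustrative assumptions.
import math

Q_E = 1.602e-19  # electron charge [C]
NI = 1.0e10      # silicon intrinsic carrier density at 300 K [cm^-3]

def j_generation(width_cm, tau_g):
    """SCL generation current density: J = q * W * ni / tau_g  [A/cm^2]."""
    return Q_E * width_cm * NI / tau_g

def j_diffusion(d_n, tau_n, n_a):
    """Diffusion current density: J = q * sqrt(D_n/tau_n) * ni^2 / N_A."""
    return Q_E * math.sqrt(d_n / tau_n) * NI**2 / n_a

j_gen = j_generation(width_cm=1e-4, tau_g=1e-3)      # 1 um SCL, 1 ms lifetime
j_dif = j_diffusion(d_n=36.0, tau_n=1e-3, n_a=1e17)  # D_n in cm^2/s

print(f"SCL generation: {j_gen * 1e9:.3g} nA/cm^2")
print(f"Bulk diffusion: {j_dif * 1e9:.3g} nA/cm^2")
```

With these (assumed) numbers the generation term dominates by several orders of magnitude, consistent with the observation in the text: generation scales with ni, whereas the diffusion term scales with ni², which is tiny at room temperature.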
The third approach is typically a characteristic of the processing, and cannot be modified. However, typical carrier lifetimes in crystalline silicon can be very large (10s of microseconds or more). In the bulk of the device the depleted area will depend on the implant well design and distribution, which requires a tradeoff with capacity. The width of the SCL can be reduced by increasing the doping concentration, to the extent that the carrier lifetime remains insensitive to that concentration. The surface interface between the silicon and surrounding oxide poses a greater challenge. To mitigate the effect of charge generated at surface trap states, designers will look to minimize Si-SiO2 surfaces exposed to strong electric fields. In the case of the pinned-photodiode APS, this is naturally achieved through the heavy diffusion doping required to form the pinning layer. The critical region in APS design is then the path underneath the gate, which is typically lightly doped and exposed to strong electric fields.

Angular Response

Cross-talk can be introduced both optically and electrically. Due to the wave nature of the optical input, imperfect color filtering, and alignment mismatch in the optical stack, some light will bleed into neighbouring sub-pixels, generating charge in the silicon. Additionally, charge generated from light absorbed in the target sub-pixel may also diffuse into neighbouring sub-pixels and be collected by an adjacent well. For a description of the experimental setup and optical simulation of spectral cross-talk, please refer to angular response 2D. In the preceding examples, the setup for the FDTD simulation is described. Two methods can be used to estimate the collected charge directly from the optical simulation. First, the power flux density through the active (or inactive) sub-pixel surface can be used to estimate the generated charge. 
This method ignores the light that may be absorbed below an adjacent sub-pixel, generating charge that would be collected by a sub-pixel other than the target. The second method assumes that charge generated in a certain region (a depletion region) will be collected. While this approach accounts for the distribution of absorbed light in the substrate, the size and shape of the depletion layers must be determined empirically. To accurately determine the angular response of the system, the absorbed light, which generates charge, can be used as a source in an electrical simulation with CHARGE. In the electrical simulation, the depletion regions are defined implicitly by the distribution of dopants and the applied bias. In addition, physical processes that contribute to the generation and recombination of charge due to impurities are also accounted for. By combining the optical and electrical simulation, a complete picture of the angular response can be obtained. The angular response can be related according to the following definitions of efficiency.

Optical Efficiency

The optical efficiency (OE) is the ratio of absorbed photons to incident photons, and is unitless. The number of absorbed photons can be calculated either from the integration of the power flux density (Poynting vector) through the pixel surface, or through the integral of the absorbed power in the substrate. The first method gives an accurate picture of the absorbed power without requiring any assumptions about the underlying volume. Please see the description of the angular response 3D with FDTD for more details.

Internal Quantum Efficiency

The internal quantum efficiency (IQE) is the ratio of collected charge (number) to the number of absorbed photons, and is unitless. The IQE can be calculated knowing the source intensity and OE. 
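The efficiency chain (OE, IQE, EQE and responsivity) can be tied together in a short numerical sketch. Every input below — the wavelength, source intensity, sub-pixel area, OE, and the 90 % charge-collection fraction — is a hypothetical value used only to exercise the definitions.

```python
# Numeric sketch of the efficiency definitions used in this section:
# OE (absorbed/incident photons), IQE (collected charge per absorbed
# photon), EQE = OE * IQE, and responsivity R = (q*lambda/(h*c)) * EQE.
# All input values are hypothetical.

H = 6.626e-34    # Planck constant [J s]
C = 2.998e8      # speed of light [m/s]
Q_E = 1.602e-19  # electron charge [C]

wavelength = 550e-9  # green sub-pixel [m] (assumed)
s_in = 1.0           # incident source intensity [W/m^2] (assumed)
area = (1e-6) ** 2   # 1 um x 1 um sub-pixel (assumed)
oe = 0.20            # optical efficiency from an FDTD result (assumed)

p_abs = oe * s_in * area  # absorbed power in the sub-pixel [W]
# Assume 90% of photo-generated carriers are collected as current:
i_collected = 0.9 * Q_E * p_abs * wavelength / (H * C)

iqe = (H * C / (Q_E * wavelength)) * i_collected / p_abs
eqe = oe * iqe
resp = Q_E * wavelength / (H * C) * eqe  # responsivity [A/W]
print(f"IQE = {iqe:.2f}, EQE = {eqe:.3f}, R = {resp:.3f} A/W")
```

The responsivity conversion is often the most convenient form for comparing against photodetector datasheets, since qλ/hc has units of A/W per unit quantum efficiency.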
In the case of monochromatic sources, $$ IQE=\frac{h c}{q \lambda} \frac{I_{\lambda}}{\int_{\Omega_{d}} P_{abs}(\mathbf{r})\, d \mathbf{r}} $$ where Iλ is the current measured in the active channel. The integral can be calculated from the OE: $$ \int_{\Omega_{d}} P_{abs}(\mathbf{r})\, d \mathbf{r} = OE \times S_{in} \times A $$ where Sin is the input source intensity in W/m2 and A is the surface area of the sub-pixel. Note that we scale the OE by the maximum possible OE for that sub-pixel (e.g. 25% for a pixel with four sub-pixels).

External Quantum Efficiency

Assuming that each absorbed photon generates an electron-hole pair, the external quantum efficiency (EQE) is simply the product of the OE and IQE: EQE = OE x IQE. The EQE is the ratio of charge collected to total incident photons, and accounts for both optical and electrical losses. For a monochromatic source, the EQE can be converted into a responsivity, $$ R=\frac{q \lambda}{h c} \times EQE $$ which is measured in A/W. This is often more convenient when characterizing a photodetector.

Transient Response

In the overview of the APS behaviour, we observed that the gate of the common-drain amplifier and the connecting metals appear as a capacitance to the floating diffusion node. This allows us to simplify the circuit model for the sub-pixel, as shown in the following schematic. The transient operation of the image sensor proceeds as follows: The image sensor is initialized to its reset state, with the photo-detector depleted of charge, and an initial voltage set on the capacitance CCG. The n-well is isolated from the floating diffusion when the transfer gate (TX) is switched off. The floating diffusion is isolated from VDD when the transistor RST is placed in a high-impedance state. The photo-detector is illuminated for an exposure period T. The charge collected in the n-well is transferred to the capacitor with a pulse applied to the transfer gate (TX). The capacitor converts the accumulated charge into a voltage signal, which acts as an input to the amplifier.
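As a rough illustration of how these quantities combine into a pixel signal, the sketch below chains EQE = OE × IQE, the responsivity formula, and the charge-to-voltage conversion on the floating-diffusion capacitance. All numeric values (wavelength, sub-pixel area, exposure, efficiencies, capacitance) are illustrative assumptions, not values from this example:

```python
# Rough end-to-end signal estimate for one sub-pixel.
# All numeric values below are illustrative assumptions.
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
Q = 1.602176634e-19  # elementary charge, C

def responsivity(wavelength, eqe):
    """Responsivity R = (q*lambda)/(h*c) * EQE, in A/W (monochromatic)."""
    return Q * wavelength / (H * C) * eqe

def signal_voltage(s_in, area, exposure, wavelength, oe, iqe, c_fd):
    """Voltage swing on the floating-diffusion capacitance after one exposure."""
    eqe = oe * iqe  # EQE = OE x IQE
    incident_photons = s_in * area * exposure * wavelength / (H * C)
    collected_electrons = eqe * incident_photons
    return collected_electrons * Q / c_fd  # dV = N*q/C

r = responsivity(550e-9, 0.48)         # ~0.21 A/W for EQE = 0.48 at 550 nm
dv = signal_voltage(s_in=1.0,          # W/m^2, assumed irradiance
                    area=(1.1e-6)**2,  # assumed 1.1 um sub-pixel pitch
                    exposure=1e-3,     # 1 ms exposure
                    wavelength=550e-9,
                    oe=0.6, iqe=0.8,   # assumed efficiencies
                    c_fd=2e-15)        # assumed 2 fF floating diffusion
```

With these assumed numbers, a few thousand incident photons yield a voltage swing of roughly a tenth of a volt, illustrating why the floating-diffusion capacitance sets the conversion gain.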
These steps are illustrated in the waveform below. The impedance of the RST transistor can be modeled with a switched impedance (resistance) in series with the supply voltage source. The contact models in DEVICE support the simulation of series and shunt resistances and capacitances, which can be used to model the electrical characteristics of the floating diffusion connected to the drain of the transfer gate (TX). A schematic of the contact circuit used in the simulation is shown in the adjacent figure. By changing the series resistance from a low-impedance to a high-impedance state, the behaviour of the RST transistor can be adequately modeled.

Doping Profile

Often, the structure and doping profile for the image sensor pixel have been simulated externally using process simulation software. The structure and doping profile can then be imported into the DEVICE layout environment using the externally generated finite element doping data set (for more information, please see the topics on reading data from HDF5 sources and the script commands for doping import and structure extraction). In this scenario, it is important to verify that the structure and material assignments are correct, particularly if additional operations were performed (e.g. mirroring or change of axis).

F. Hirigoyen, A. Crocherie, J. M. Vaillant, and Y. Cazaux, "FDTD-based optical simulations methodology for CMOS image sensors pixels architecture and process optimization," Proc. SPIE 6816, 681609 (2008)
X. Wang, "Noise in Sub-Micron CMOS Image Sensors," Ph.D. Thesis, Delft University of Technology
Analysis and visualization of disease courses in a semantically-enabled cancer registry
Angel Esteban-Gil, Jesualdo Tomás Fernández-Breis (ORCID: orcid.org/0000-0002-7558-2880) & Martin Boeker
Journal of Biomedical Semantics volume 8, Article number: 46 (2017)
Regional and epidemiological cancer registries are important for cancer research and the quality management of cancer treatment. Many technological solutions are available nowadays to collect and analyse data for cancer registries. However, the lack of a well-defined common semantic model is a problem when user-defined analyses and data linking to external resources are required. The objectives of this study are: (1) the design of a semantic model for local cancer registries; (2) the development of a semantically-enabled cancer registry based on this model; and (3) the semantic exploitation of the cancer registry for analysing and visualising disease courses. Our proposal is based on our previous results and experience working with semantic technologies. Data stored in a cancer registry database were transformed into RDF using a process driven by OWL ontologies. The semantic representation of the data was then processed to extract semantic patient profiles, which were exploited by means of SPARQL queries to identify groups of similar patients and to analyse the disease timelines of patients. Based on the requirements analysis, we have produced a draft of an ontology that models the semantics of a local cancer registry in a pragmatic, extensible way. We have implemented a Semantic Web platform that allows transforming and storing data from cancer registries in RDF. This platform also permits users to formulate incremental user-defined queries through a graphical user interface. The query results can be displayed in several customisable ways. The complex disease timelines of individual patients can be clearly represented. Different events, e.g.
different therapies and disease courses, are presented according to their temporal and causal relations. The presented platform is an example of the parallel development of ontologies and applications that take advantage of semantic web technologies in the medical field. The semantic structure of the representation renders it easy to analyse key figures of the patients and their evolution at different granularity levels. Cancer registries are an important part of the health information systems in local and regional health organizations. Regional and epidemiological cancer registries are the foundation for cancer research and the quality management of cancer treatment. In most developed countries, the operation and the sampling of data in cancer registries are statutory. Cancer registries are complex structures for the documentation and analysis of data from patients diagnosed with cancer [1, 2]. Different types of cancer registries collect patient data from single institutions (institutional), regions (regional) or larger areas (epidemiological). Whereas epidemiological registries provide mainly population-based information on morbidity and mortality, institutional and regional registries can provide fine-grained information on treatment and conditional survival. The information of regional cancer registries serves different requirements such as the quality control of patient care, the comparison of patient-related outcome parameters and research support. Institutional and regional registries are also the main data source for epidemiological cancer registries. Regional cancer registries collect information about diagnosis, therapies and the course of the disease [3], the most important being the histopathology of the primary tumor, including tumor staging and grading. The long-term follow-up of the patients' vital status is one of the resource-intensive tasks of tumor registries, providing the basis for survival analysis.
Different cancer registry software solutions are currently available, such as METRIQ, OncoLog Registry or CNEXT. The standardisation of cancer registry software is difficult because of a large set of rapidly changing legal and scientific requirements. Most of these software solutions suffer from two main limitations. The interoperability with other health applications such as Electronic Medical Records (EMRs) is limited, which is a typical problem of clinical information systems [4]. The heterogeneity of the underlying data models is a consequence of the differences between the data models in current cancer registry software [5, 6]. This imposes severe limitations on research and on the progress of cancer studies when clinical research activities need to integrate data from cancer registries of several regions. There have been proposals to overcome the aforementioned problems. In [7] the authors use the Unified Modelling Language for modeling cancer registry processes in a hospital. In [8] the authors propose a set of indicators to evaluate specific quality measures in cancer care, and [9] attempts to optimise cancer registries by means of knowledge-based systems for monitoring patient records. Unfortunately, these approaches do not guarantee the generation of standard models and do not provide satisfactory solutions for scenarios which require customisable, comparative analyses and data linking to external resources [5]. On the technical side, the Semantic Web stack can be employed to give information well-defined meaning, better enabling computers and people to work in cooperation [10]. Ontologies [11] constitute the standard knowledge representation mechanism for the Semantic Web, in which languages such as the Web Ontology Language (OWL) enable a formal representation of the domain of interest.
Important international initiatives [12, 13] strive to ensure that the Semantic Web becomes a fundamental system to achieve consistent and meaningful representation, access, interpretation and exchange of clinical data. These semantic web technologies have already been used to represent cancer diseases; e.g. in [14], an ontology models clinico-genomic cancer trials. Ontologies were also proposed to represent certain types of cancer disease [15, 16]. The main objective of this study is the development of a Semantic Web platform that facilitates the analysis and visualisation of data from cancer registries, including (1) the representation of the disease course of a patient, (2) the representation of the aggregated disease courses of a group of patients, and (3) the definition of customisable dashboards for patient selection and visualisation of the data. The use of simulated data demonstrates the viability of incorporating a local cancer registry into this model. A comparative performance analysis of relational databases and semantic repositories demonstrates excellent performance measures for the semantic repository.

Standards and classification systems in cancer registries

Most information contained in cancer registries is derived from primary care interactions. For the purpose of structured secondary documentation, tumor documentation staff carefully reprocess the primary documentation. In many countries, a standardised common dataset has been developed to better support exhaustive data exchange with the epidemiological cancer registries, proposing the classification of diagnostic and treatment information with clinical coding systems. The most important clinical classification system applied in cancer registries is the International Classification of Diseases version 10 (ICD-10) [17]. This classification system is divided into chapters, with blocks of diseases. For example, chapter II includes the classification for neoplasms between the blocks C00 and D48.
These blocks are subdivided into hierarchies that further specify the diagnosis. The ICD-O is a domain-specific extension of the ICD for cancer diseases. ICD-O is a dual classification allowing the coding of topography (tumor site) and tumor morphology. SNOMED CT [18] has adopted ICD-O codes for the classification of tumor morphology. Several staging systems for cancer have evolved over time and continue to evolve with scientific progress. The most important classification system is the Classification of Malignant Tumours (TNM) [19], which describes the anatomical extent of the disease. This system is under constant development by the Union for International Cancer Control and the American Joint Committee on Cancer. The TNM staging is based on the size or extent of the primary tumor (T), the involvement of regional lymph nodes (N), and the presence of metastases, i.e. secondary tumors formed by the spread of cancer cells to other parts of the body (M). Clinical procedures are also encoded with coding systems such as the ICD-10-PCS (Procedure Coding System) [20], denoting aspects such as the clinical classification of the procedure, the surgical section or the body system.

Visualisation of clinical records

Since the emergence of the electronic medical record (EMR), the amount of data has increased exponentially [21, 22]. The main objective of the EMR is to represent the clinical characteristics of a patient from several perspectives. For a variety of reasons [23] this objective has not yet been achieved. Visualisation methods are one way of facilitating the representation and flexible exploitation of EMR data. According to [24] there are two types of visualisation of EMR data: Multimedia visualisation includes video, audio, graphical plots, rich text, hyperlinks and other multimedia contents [25, 26]. Temporal visualisation depicts clinical timelines of the health state of the patient [27, 28].
Some of these representations are able to generate a projection of the future clinical characteristics of the patient using data mining techniques over the entire EMR [29, 30]. The TimeLine project [24] combines the two approaches with four key aspects of the user interface: demographics and encounter information, the medical problem list, graphical timelines, and the data viewer, which allows navigating over all data of the patient, such as bone scans, laboratory data, etc. The main advantage of this project is that the clinician can visualise all patient data without switching between various information systems.

Semantic exploitation of data

Semantic representation

The methods for the transformation and semantic representation of information follow similar approaches. They can be classified into (1) those which generate a representation of the datasets in semantic formats as the result of applying mappings between the input data source and the ontology that provides the meaning for the content; and (2) those which permit ontology-based data access, using data in traditional formats but querying with semantic web query languages. Next, we describe the most popular approaches and tools from both categories: D2RQ (Accessing Relational Databases as Virtual RDF Graphs) allows querying data stored in relational databases using SPARQL on virtual RDF graphs [31]. This tool is fully automatic. Triplify [32] allows publishing the content of relational databases as Linked Data [33] based on a partially automatic transformation process. Linked Data Views (Virtuoso). OpenLink Virtuoso [34] is a database management system that handles several persistence models (relational, XML, object-relational, virtual and RDF). Persistence models stored in Virtuoso can be queried with SPARQL based on their automatic representation as Linked Data Views [35]. XS2OWL (Representation of XML Schemas in OWL syntax). XML schemas can be transformed into OWL [36].
XML databases can be automatically transformed and queried with SPARQL. RDB2OWL (A Database-to-Ontology Mapping Language and Tool). An approach to transform the data stored in relational databases into RDF or OWL [37]. The user manually defines mappings between the inputs and the outputs. The transformation of large ontologies can be tedious. Karma. It links a source model to ontologies to generate a semantic representation of the data source [38]. This process is partially automatic. Populous. An assistant for building ontologies [39], with the process being guided by patterns. Populous is able to import CSV data. SWIT (Semantic Web Integration Tool). A semantic transformation engine capable of generating RDF and OWL repositories from both relational and XML databases [40]. Besides transforming the data, SWIT prevents the generation of logically inconsistent data with the support of DL reasoners. The transformation method has three main steps: (1) definition of the mapping rules between the fields of the database and the ontology; (2) generation of the OWL data; and (3) importing the OWL data into the semantic data store. Most approaches are based on mappings between the relational and semantic primitives of the corresponding modelling languages. Because they perform only a syntactic transformation, the meaning of the content is not really exploited. In this work we use the SWIT transformation approach, which preserves the meaning of the content based on the specification of mappings between the entities of the source relational schema and the entities of the target domain ontology.

Semantic querying

The amount of RDF data, and the number of applications that use semantic web technologies for storing, publishing and querying data, has increased constantly in the last decade [41]. Semantic endpoints in which users can exploit the data without any knowledge of SPARQL have been developed.
For example, Natural Language Processing has been used to develop a question answering system [42]. In other works, the authors use parametrised queries to answer questions based on a template [43]. In faceted search over RDF repositories, the user can refine the filters over the results of each SPARQL query [41]. In the biomedical field, the use of semantic querying is limited to the generation of semantic search engines or dashboards. BioDash is an example of a semantic dashboard that exploits heterogeneous data sources for drug discovery [44]. Chem2Bio2RDF provides dashboards automatically collecting associations within the systems' chemical biology space [45]. In this work, our goal is to go beyond the state of the art by allowing users to dynamically define their semantic dashboards.

Ontology construction

Best practices in ontology engineering recommend reusing existing content and creating modular ontologies [46]. These recommendations are implemented by reusing concepts from different ontologies, so that the resulting ontology infrastructure is likely to be a networked ontology. The OBO Foundry has also developed a series of principles for ontology construction which propose principles for modularity, orthogonality and reusability [47]. The method for constructing the domain ontology used in this work consisted of identifying the main entities that should be represented, searching BioPortal for existing ontologies containing classes representing these entities, selecting the most appropriate ones (by our subjective criteria), and extending them when necessary. The final ontology has been implemented using Protégé in OWL-DL, which is the OWL subset based on Description Logics.

Data generation and representation

In this work, we have generated a simulated cancer registry dataset using the statistical distribution of a real registry dataset, following the method proposed in [48].
Data provided by the National Cancer Registry of Ireland were used to obtain a patient distribution by age. The cancer registry was accessed on 10-05-2016 and we included 533,409 cases diagnosed from 1994 to 2013. The patients were generated in groups classified by gender and 5-year age ranges (0-4, 5-9, 10-14, etc.). The last group contains patients older than 85 years. For each group of patients we calculated the probability distribution of diagnosing a concrete type of cancer, and the probability distribution of receiving a particular therapy (surgery, chemotherapy, radiotherapy, hormonal therapy, ...) for a concrete diagnosis. These probabilities were used to assign weights to every type of cancer with its therapies for each group of patients. For example, for patients between 60 and 64 years old, the probabilities for different types of cancer are breast cancer (0.23), lung cancer (0.17), prostate cancer (0.17), and colorectal cancer (0.08). For patients within this age range and diagnosed with colorectal cancer, the probabilities of the therapies would then be: teletherapy (0.44), chemotherapy (0.44) and surgical treatment (0.12). Figure 1 shows the stack of distributions. When the random number is between 0.57 and 0.64, we assign colorectal cancer as the patient's diagnosis. Then, we generate a new random number to assign the first therapy, and so on.

Schema of probability distribution of diagnoses and therapies

Furthermore, survival and mortality data were used for extracting the evolution of the disease. Finally, we ensured that the number of patients with more than one cancer diagnosis meets the distribution of the real dataset. Our simulated dataset consists of randomised cases. For each case, we establish the gender and age of the patient. Then, we apply a partially random distribution algorithm to generate the patient characteristics.
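The stacked cumulative distribution used for this sampling can be sketched in a few lines of Python. The weight table below is a hypothetical single gender/age group invented for illustration, not the real registry distribution:

```python
import random

# Hypothetical weights for one gender/age group; the real values come
# from the National Cancer Registry of Ireland data, not these numbers.
DIAGNOSIS_WEIGHTS = {
    "breast cancer": 0.23,
    "lung cancer": 0.17,
    "prostate cancer": 0.17,
    "colorectal cancer": 0.08,
    "other": 0.35,
}

def sample_from_stack(weights, rng):
    """Walk the stacked cumulative distribution and return the category
    whose interval contains the random draw."""
    r = rng.random()
    cumulative = 0.0
    for category, weight in weights.items():
        cumulative += weight
        if r < cumulative:
            return category
    return category  # guard against floating-point round-off

rng = random.Random(42)  # seeded for reproducibility
cases = [sample_from_stack(DIAGNOSIS_WEIGHTS, rng) for _ in range(10000)]
```

The same draw-and-walk step is repeated with the therapy weight table of the assigned diagnosis, and again for each subsequent therapy.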
This algorithm uses the weights assigned to each type of cancer, therapy or course to generate distributions similar to the original database. The algorithm is able to generate patients with one or more diagnoses with various therapies and courses following the previously calculated probability distribution. Such data have been represented in RDF by applying SWIT, whose transformation method has three main steps: (1) definition of the mapping rules between the database schema and the ontology; (2) generation of the RDF data; and (3) importing the RDF data into the semantic data store. We use a semantic repository to store the data, which integrates two types of data sources: (1) an OWL file server with the formal representation of the domain, and (2) an RDF repository which stores the data. Virtuoso is used as the data store [49].

Exploitation model

Our approach includes a set of methods for exploiting the information model in the semantic repository.

Ontology-driven search (ODS)

SPARQL is the language used for querying the data store. We use our ontology-guided input text subsystem [50] to make it easier for clinicians to exploit the data warehouse. The main objective is to allow users to design and execute SPARQL queries without knowing SPARQL. This tool is an editor for SPARQL queries supported by an OWL ontology. The OWL ontology provides the classes and properties that can be used for creating the SPARQL query that will be executed on the RDF repository. The construction of the queries begins with the selection of a main class of the ontology. For example, if we wish to find patients, then the ODS begins with the selection of the ontology class Patient. The user can define filters over this class by using the data properties or object properties of the ontology. The use of owl:ObjectProperty permits the inclusion of other concepts in the query.
For example, if we wish to find patients whose diagnosis is lung cancer, the user can select the owl:ObjectProperty hasDiagnosis, which is associated with the class Patient; this permits using the owl:ObjectProperty Pathological structure of the class Diagnosis to select the class representing lung cancer. The ODS is able to generate SPARQL queries in which the subject is an ontology class, the predicate is a property and the object can be either a value or another concept. By selecting an owl:ObjectProperty, the user can add other properties of this concept to the query. This service follows the approach of template-based searches [43]. With this tool, the data store can be searched using the properties defined in the ontology. Moreover, it allows the generation of aggregated queries for the elaboration of representative charts of the data store. The generated queries can be stored for parameterisation and reuse. Aggregate functions such as count, average, min or max can be used. The results of these queries can be linked with other resources. The filters used can also be stored for later reuse. The semantic search engine not only allows for data retrieval but also for creating new classes in the semantic model, which can be assimilated to OWL defined classes. For example, the query for patients with colon cancer could define the class "Patient with colon cancer". The members of this class are obtained by executing the corresponding query.

Semantic profiles

Conceptually speaking, the semantic profile is defined as the set of relations and properties of an individual. Semantic profiles permit identifying groups of patients that share the same properties and are therefore useful for comparing and studying such groups. Ontologies are of special interest for creating profiles because they allow selecting and aggregating individuals from a conceptual perspective.
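The template-based query construction performed by the ODS can be sketched as follows. The `cr:` namespace and the class and property names are invented placeholders; the actual ontology terms differ:

```python
# Sketch of template-based SPARQL construction, as performed by the ODS.
# The prefix, class and property IRIs below are invented placeholders.
PREFIX = "PREFIX cr: <http://example.org/cancer-registry#>"

def build_patient_query(filters):
    """filters: list of (object_property, target_class) pairs that hang
    off the main class cr:Patient, one triple pattern pair per filter."""
    lines = [PREFIX, "SELECT ?patient WHERE {", "  ?patient a cr:Patient ."]
    for i, (prop, target) in enumerate(filters):
        lines.append(f"  ?patient cr:{prop} ?v{i} .")
        lines.append(f"  ?v{i} a cr:{target} .")
    lines.append("}")
    return "\n".join(lines)

# "Patients whose diagnosis is lung cancer", as in the example above:
query = build_patient_query([("hasDiagnosis", "LungCancer")])
```

Each additional filter selected in the user interface simply appends another pair of triple patterns before the query is sent to the endpoint.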
Our approach can also generate the semantic profile of a group of patients by applying one or more criteria. Hence, we define a semantic profile as the subset of the semantic information of an individual that is interesting for a particular analysis. The profile of the individual i is calculated as shown in Eq. 1. $$ SP(i) = S(d) \cup S(SP(o)) $$ where S(d) represents a subset of the selected owl:datatypeProperty values and S(SP(o)) represents a function that retrieves the individuals linked to i through owl:objectProperty axioms. The semantic profile is built by the application of the ODS using the entities defined in a domain ontology. The ODS permits selecting the properties of interest and defining the filtering and aggregation conditions. The user can define the SPARQL queries that will return the subset of properties and relationships that provides the best description of the individual for the specific case. This information is obtained for each individual, and the results can be viewed as a cache of the most important semantic information describing the individuals. Semantic profiles can be seen as a purpose-specific application of the semantic search engine. Two types of semantic profiles are of special relevance in the context of this work, namely, the timeline representation of a patient and the aggregated disease timeline representation of a patient group with some common properties. Both are described in the next sections.

Disease timeline of a cancer patient

The disease timeline of a patient contains information about various health-related events (e.g. diagnosis, patient conditions, therapies and the disease courses). Retrieving these events for a patient requires data normalisation for the representation of therapies by month. Figure 2 shows that every diagnosis has an associated timeline which includes therapies and the disease course, both ordered by month.
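The recursive definition of Eq. 1 can be illustrated on a toy in-memory graph: a profile collects the selected datatype properties of an individual, plus the profiles of the individuals reachable through selected object properties. The node names, properties and graph structure below are invented for illustration and do not reflect the platform's actual data model:

```python
# Toy in-memory graph; all identifiers and values are invented.
GRAPH = {
    "patient1": {"data": {"gender": "female", "birthYear": 1950},
                 "links": {"hasDiagnosis": ["diag1"]}},
    "diag1":    {"data": {"icd10": "C50", "stage": "II"},
                 "links": {"hasTherapy": ["ther1"]}},
    "ther1":    {"data": {"type": "chemotherapy", "startMonth": 3},
                 "links": {}},
}

def semantic_profile(individual, data_props, object_props, seen=None):
    """SP(i): selected datatype properties of i, united with the profiles
    of the individuals linked to i through selected object properties."""
    seen = seen or set()
    if individual in seen:  # guard against cycles in the instance graph
        return {}
    seen.add(individual)
    node = GRAPH[individual]
    profile = {p: v for p, v in node["data"].items() if p in data_props}
    for prop in object_props:
        for target in node["links"].get(prop, []):
            profile[prop + ":" + target] = semantic_profile(
                target, data_props, object_props, seen)
    return profile

sp = semantic_profile("patient1",
                      data_props={"gender", "icd10", "type"},
                      object_props={"hasDiagnosis", "hasTherapy"})
```

Properties not listed in `data_props` (here `birthYear`, `stage`, `startMonth`) are left out, which is what makes the profile a purpose-specific subset rather than a full export of the individual.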
For example, we can show the timeline for a breast cancer patient that includes the applied therapies (surgical treatment, chemotherapy, etc.) for every period. Furthermore, we can show the course of the disease and its relation to changes in therapies. The timeline also includes the date of the diagnosis and the date of the last encounter. Finally, the profile contains all the patient's diagnoses and a list of her conditions.

Schema of semantic profile of a cancer patient

Aggregated disease timeline of a group of patients

The aggregated timeline of a patient group (see Fig. 3) includes all the events of the selected patients who satisfy the same selection criteria for a given period and for a concrete diagnosis. The groups of patients are defined using the ODS, which permits defining groups of patients with the same diagnosis, staging, grading and age range. This permits obtaining the semantic profile of each member of the group. Then, the semantic profiles of the members of the group are globally analysed, thus obtaining a matrix that contains the disease courses of the included patients for every month of the disease. Using this method, the user is able to generate, for example, a group of patients with lung cancer aged between 60 and 70 years. In this case, our service can represent which therapies are applied in chronological order and which are the most likely courses. At the same time, these graphical representations can be used as new filters to recalculate the corresponding variables. For example, if the user selects chemotherapy as the first therapy, the representation changes to reflect the new scenario.

Overview of the generation of the aggregated disease timeline of a patient group

Enrichment analysis is a type of statistical analysis that is frequently used in biomedical domains [51].
Our enrichment analysis method is based on the hypergeometric distribution method established for GO::TermFinder to determine the significance of a Gene Ontology annotation to a list of genes [52], and the hypergeometric distribution was implemented using Apache Commons Math. This type of analysis is useful to compare several subsets of patients with the same diagnosis. We perform a statistical analysis of the ICD-10 codes to support the users in the definition of diagnosis-based groups. We calculate the P-value for each group as shown in Eq. 2. $$ P = 1 - \sum_{i=0}^{k-1}\frac{\binom{M}{i}\binom{N - M}{n - i}}{\binom{N}{n}} $$ where N is the total number of ICD10 codes used in the cancer registry, M is the number of diagnoses annotated with each ICD10 code, n is the number of ICD10 codes of interest for a concrete patient group and k is the number of ICD10 codes used for annotating each diagnosis.

Semantic dashboard

A semantic dashboard is a graphical representation of the results of one or more queries. Semantic dashboards are represented as 〈〈L, V〉, isDashboard, U〉, where 〈L, V〉 are the results of the SPARQL query as key-value pairs, and U is the user who defined the dashboard. Each user can define and customise her dashboards. The semantic dashboard is implemented using the ODS and permits the creation of aggregated data. The results can be represented graphically and in tabular format. Based on the persistence model of SPARQL queries, the representations can be used for accessing the data instances contained in each representation. Consequently, aggregation control boxes can be regarded as search filters of the semantic search engine. Figure 4 shows the query generated with the ODS for searching patients over 70 years old, classified by cancer type. On the left side we show the graphical representation and on the right side the data in tabular format.
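The hypergeometric tail used for this enrichment analysis can be reproduced with a minimal sketch using the Python standard library (as a stand-in for the Apache Commons Math implementation mentioned above):

```python
from math import comb

def enrichment_p(N, M, n, k):
    """P = 1 - sum_{i=0}^{k-1} C(M,i)*C(N-M,n-i)/C(N,n): the probability
    of drawing at least k annotated codes in a sample of n, when M of
    the N codes in the registry carry the annotation."""
    return 1.0 - sum(comb(M, i) * comb(N - M, n - i)
                     for i in range(k)) / comb(N, n)

# Small worked example with invented counts:
# 10 codes in total, 4 annotated, sample of 3, at least 2 annotated.
p = enrichment_p(N=10, M=4, n=3, k=2)  # = 1 - (20 + 60)/120 = 1/3
```

A small P-value indicates that the annotation is over-represented in the patient group compared with the registry as a whole.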
Example of semantic dashboard

The semantic dashboards can also include multiple aggregated queries and display comparative graphics. Finally, dashboards can also be persisted, parameterised by users and reused. We have developed an algorithm based on Bayesian networks to suggest the most appropriate treatment for a patient. This algorithm is based on the generation of probabilistic models using semantic node profiles. Bayesian networks cannot have cycles [53], but our semantic dataset might contain cycles. The semantic profiles might have cycles due to, e.g., the repetitive application of a given treatment to the patient. To solve this problem, a tree network is generated for each profile. To determine which treatment is likely to be the most appropriate for a patient given a number of features, the model first retrieves all the patients with such features, and then uses their semantic profiles to generate the map of Bayesian networks with the possible treatments by period (month, term, etc.). Once a treatment is selected, the network is re-calculated to improve the next recommendation. Given this dynamic aspect of the network, the method requires that the user indicates which characteristics might generate a cycle in the network, to prevent the algorithm from falling into an infinite loop. The approach described in the previous sections has been applied in a scenario that simulates an institutional cancer registry. An ontology modeling the semantics of an institutional cancer registry has been developed. This ontology has driven the transformation of the simulated dataset into RDF and its storage in the semantic data store. We have implemented a Semantic Web platform that permits users to exploit the cancer registry dataset by formulating incremental, customisable queries using a graphical user interface based on the ODS and by generating dashboards on demand.
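As a deliberately simplified stand-in for the Bayesian-network recommender described above, the sketch below recommends the most frequent therapy per period among patients whose earlier therapies match the choices already made, and "re-calculates" after each selection simply by re-filtering. The profile data are invented:

```python
from collections import Counter

# Invented per-month therapy sequences extracted from semantic profiles;
# a frequency-counting stand-in for the Bayesian-network recommender.
PROFILES = [
    ["surgery", "chemotherapy", "chemotherapy"],
    ["surgery", "chemotherapy", "radiotherapy"],
    ["chemotherapy", "chemotherapy", "radiotherapy"],
    ["surgery", "radiotherapy", "radiotherapy"],
]

def recommend(profiles, period, chosen_so_far):
    """Most frequent therapy at `period` among patients whose earlier
    therapies match the choices already made; None if nobody matches."""
    matching = [p for p in profiles
                if p[:len(chosen_so_far)] == chosen_so_far and len(p) > period]
    counts = Counter(p[period] for p in matching)
    return counts.most_common(1)[0][0] if counts else None

first = recommend(PROFILES, 0, [])        # majority first therapy
second = recommend(PROFILES, 1, [first])  # conditioned on the selection
```

The full model additionally conditions on patient features and breaks cycles by unrolling each profile into a tree, which plain frequency counting does not capture.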
The complex timelines of the disease of individual and aggregated patients can also be explored and analysed. Next, more details about these results are provided.

The ontology
We have built a preliminary cancer registry ontology8 based on existing ontologies and fulfilling the requirements of a local cancer registry. This first draft ontology pragmatically represents some aspects of cancer diseases and their treatment. The ontology reuses the Semanticscience Integrated Ontology (SIO) [54] and the Ontology for Biomedical Investigations (OBI) [55]. It incorporates concepts from clinical standards used in cancer such as ICD10, ICD-O-3, TNM staging, the Karnofsky index [56] and the ASA index [57]. The ontology has been defined in OWL-DL. The metrics of the ontology are as follows (numbers in brackets represent the number of entities added by our work): the ontology contains a total of 20,551 classes (335), 28 properties (18) and 342 object properties (29), with 152,529 logical axioms (2581).

The ontology defines the following classes:

Patient represents a person with any type of cancer disease. Properties: gender, birth date, diagnosis, therapies and disease courses. This class is equivalent to the class Patient in SIO.

Patient condition represents the health condition of a patient at a given time. Properties: reference date, age, weight, height, Karnofsky index, ASA index and menopause status.

Diagnosis represents the patient diagnosis at a given time. Properties: ICD10 code, grading, staging, therapies, date, pathological structure, anatomical structure and tumor type. This class is equivalent to the class Diagnosis in SIO.

Therapy represents the patient therapies of a diagnosis at a given time. Different kinds of therapy, such as Chemotherapy, Surgical Treatment and Nuclear Medicine, have been modeled in the ontology as subclasses of Therapy. Properties: medication, start date and end date.
Disease course represents the development in time (process) of a tumor disease of a certain type (diagnosis) over a time interval at a given time point. Different kinds of course, such as Complete remission (the tumor is no longer detectable), Progression (the tumor mass increases by a certain amount), Recurrence (after complete or partial remission, the tumor mass increases again), and others have been modeled in the ontology as subclasses of Disease course. Properties of disease course are diagnosis, patient conditions, stage, order and date. The properties date and order are the key to sorting the courses of the patient for a concrete diagnosis. This class is equivalent to the class Disease course in OBI.

The ontology also includes some classes to represent the TNM classification system of malignant tumors. They include anatomical entities for cancer grading and staging, e.g. the Primary tumor, Regional Lymph Nodes and Distant Metastasis hierarchies. Health Classification System is the superclass of all classes representing coding artifacts of health-related classification systems. To build the taxonomies of classifications for a cancer registry, we tried to reuse other ontologies; for the ICD10 codes we use the ontology built in [58].

We have evaluated the quality of our ontology using the Ontology Quality Evaluation Framework (OQuaRE) [59]. OQuaRE is a framework for evaluating the quality of ontologies based on standards of software quality. OQuaRE automatically calculates quality scores in the range [1,5] for a series of characteristics and subcharacteristics. A score of 1 indicates that the ontology does not fulfill the minimal requirements, 3 indicates that it meets the requirements, and 5 indicates that it exceeds the requirements. Table 1 shows the results for our ontology. The scores for Functional Adequacy, Maintainability, Operability, Structural and Transferability are over 4.
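Since date and order are described above as the key for sorting the disease courses of a diagnosis, the chronological sequence can be recovered with a one-line sort; this is a sketch with plain dictionaries, where the function name and field layout are ours and simply mirror the listed properties:

```python
def courses_in_order(courses, diagnosis_id):
    """Return the disease courses of one diagnosis sorted
    chronologically by the (date, order) key pair."""
    mine = [c for c in courses if c["diagnosis"] == diagnosis_id]
    return sorted(mine, key=lambda c: (c["date"], c["order"]))

courses = [
    {"diagnosis": "d1", "date": "2015-03", "order": 2, "kind": "Progression"},
    {"diagnosis": "d1", "date": "2015-03", "order": 1, "kind": "Partial remission"},
    {"diagnosis": "d2", "date": "2015-01", "order": 1, "kind": "Recurrence"},
    {"diagnosis": "d1", "date": "2014-11", "order": 1, "kind": "Complete remission"},
]
ordered = courses_in_order(courses, "d1")
```

The order property disambiguates courses recorded under the same date, which is what makes the pair a total ordering for one diagnosis.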
The lowest scores are achieved for Compatibility and Reliability, although they are over 3. The results show that our ontology has a high level of cohesion, consistency, formalisation, modularity and reusability, which are the most relevant aspects for the present work.

Table 1 OQuaRE Metrics for the Cancer Registry ontology

The semantic cancer registry system
We have implemented a prototype system9 based on the methods described in the previous sections. Figure 5 shows the three main parts of this system. All the components of our system have been developed from scratch except SWIT, which is a previous result of our research group. The upper part of the figure shows the data transformation module, which uses SWIT for transforming the original data into semantic information stored in the semantic data store.

Overview of the system

The cancer registry ontology is the core of the system, allowing for the computational management of the information related to the cancer patients. All the services offered by the prototype are implemented on top of this core. The data transformation requires mapping the source data schema to the cancer registry ontology. The lower part of the figure shows the other two modules of the system. The right one shows the module for the analysis of individual patients, that is, the extraction of semantic profiles and timeline analysis. The left one shows the module for the analysis of groups of patients, which also includes graphical access to the disease courses of those groups. The ODS permits the creation of groups of patients that share some semantic properties. This permits the generation of charts and tables with accumulated data from the semantic repository. In this case, the system provides an option for adding the grouping class or property, so that it can be considered a customisable dashboard designer. The dashboard permits users to select and aggregate the information on every class of the semantic model.
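The grouping step behind such a dashboard amounts to cross-tabulating instances by the selected class or property. A minimal sketch of that aggregation (the function name, the two-level grouping and the sample records are ours, chosen to echo the age-range/therapy dashboard of Figure 6):

```python
from collections import defaultdict

def dashboard_table(patients, group_by, then_by):
    """Cross-tabulate patients by two selected properties,
    producing the counts a chart or table view would display."""
    table = defaultdict(lambda: defaultdict(int))
    for p in patients:
        table[p[group_by]][p[then_by]] += 1
    return {g: dict(sub) for g, sub in table.items()}

patients = [
    {"age_range": "50-59", "first_therapy": "chemotherapy"},
    {"age_range": "50-59", "first_therapy": "surgery"},
    {"age_range": "60-69", "first_therapy": "chemotherapy"},
    {"age_range": "50-59", "first_therapy": "chemotherapy"},
]
table = dashboard_table(patients, "age_range", "first_therapy")
```

In the actual system this aggregation is expressed as a SPARQL query over the RDF store rather than an in-memory loop; the structure of the result is the same.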
This module is the base for the construction of other services such as the graphical representation of the aggregated timelines of a group of patients or the customisable dashboards. The dashboard visualises the concepts of the model in charted and grouped forms, and multiple, on-demand, incremental dashboards can be built. For instance, a user can generate a pie chart selecting patients by their first therapy. The user can save any dashboard for querying the results without needing to generate it again.

Application to the simulated dataset
We have performed an initial evaluation of the system. We have generated a simulated database with 207,190 patients10. By the application of SWIT, the generated dataset meets the constraints defined in our ontology, whose entities are used for creating the RDF dataset. The time for the transformation of the dataset from the relational database to the semantic datastore was thirty-two minutes (main features of the server: Intel® Core™ i7-3770T processor (8M cache, up to 3.70 GHz), 8GB RAM, SATA2). We have carried out some tests based on the execution of different types of queries to compare the performance of the relational and semantic stores. Table 2 shows that the semantic datastore is slower than the relational one for basic queries that do not require joins. However, the semantic datastore performs better than the relational model on this dataset, even with indexes, for more complex queries. The semantic datastore is also faster when filtering by a single property of the class or table column.

Table 2 Results of the migration of the relational database to the semantic data store

This tool permits users to formulate incremental, user-defined queries with a graphical user interface based on the ODS. Figure 6 shows a comparative graphic of the therapies applied to patients diagnosed with colorectal cancer in different age ranges. Table 3 shows the generated query for this case.
The query results can be displayed in several customisable ways, allowing for the generation of on-demand dashboards.

Dashboard view

Table 3 SPARQL query generated by ODS for a dashboard

Graphical representation of the disease timeline of a patient
This service permits users to observe the main properties of the timeline of a patient with a cancer disease. Figure 7 shows an excerpt of the therapy and course timeline of a patient with pharynx cancer. In this view, users can see the details of the diagnosis and of every therapy applied in each period. Besides, users are provided with two evolution charts, which are based on the patient course and on the Karnofsky index.

Excerpt of the timeline representation

Graphical representation of the aggregated disease timeline of a patient group
Figure 8 shows the selection and the aggregation of patients using the following criteria: male patients aged between 50 and 70, diagnosed with colorectal cancer, and who have received Chemotherapy. Table 4 shows the query generated for this case.

Ontology-driven searcher view

Table 4 SPARQL query generated by ODS for a filter

After the selection and the aggregation of patients, the system generates charts that contain the therapies and the disease courses of the patients. This service can be employed as an exploratory therapy simulator. Optionally, the entire time matrix can be recalculated by selecting a certain therapy. This can help the user to estimate which therapy is likely to be the most appropriate. Figure 9 shows an excerpt of the panel for analysing the first two months of the therapies of a group of 60 patients.

Excerpt of the aggregated disease timeline of a patient group

The enrichment analysis
Term enrichment was performed on several patient groups using the hypergeometric distribution method for the ICD10 code annotations on each diagnosis. First, we used a sample of cancer cases related to over 300 patients.
Our design requirement for this sample was to include patients of both genders, so we discarded breast and prostate cancer for this analysis. The sample contained three main cohorts: diagnoses of lung cancer (469), melanoma (338) and colorectal cancer (311). Table 5 shows the results associated with lung cancer for males and females. The results show that the difference between both groups is not significant for lung cancer but, as shown in Table 6, it is significant for colorectal cancer. For example, Malignant neoplasm of rectum is clearly over-represented among males, which permits the conclusion that this diagnosis is much more common in men.

Table 5 Term enrichment for ICD10 codes of lung cancer

Table 6 Term enrichment for ICD10 codes of colorectal cancer

The target users of our platform are described next:

Physicians can use our platform to extract knowledge from the cancer registry in aggregated form, filtering on the risk of patients by applying clinical criteria. Furthermore, they can obtain a graphical representation of the disease course of a concrete patient or a group of patients.

Health managers can use our platform to generate customisable dashboards to prepare a follow-up of the clinical services involved in the diagnosis or therapies for cancer.

Tumor documentation specialists can use the platform to detect cases with incomplete or inconsistent documentation for data curation.

Cancer registries have become a basic tool for disease research and treatment. Nowadays, there are several technological solutions able to manage and analyse the information of patients with a determined diagnosis. However, the lack of formal semantic models is a problem when personalised analyses or external data links are required. In this paper we have presented a semantic platform for the analysis and visualisation of records in an institutional cancer registry.
Based on the analysis of requirements, we have developed an ontology that models the semantics of a regional cancer registry. We have used this model and SWIT for transforming and storing simulated data from a cancer registry in a semantic data store. Our approach permits users to formulate incremental, user-defined queries with a graphical user interface based on the ODS. The results of the queries can be displayed in several customisable ways, allowing for the generation of on-demand dashboards. The complex timelines of the disease of individual and aggregated patients can be clearly represented.

Rule-based systems and logic-based models are semantic approaches that have been applied to cancer registries, for example for the analysis of cancer registry processes [7], quality assurance [8] and decision support [9]. Our approach innovates by combining traditional technologies, such as relational databases, with semantic web technologies. We have created an OWL ontology for representing some aspects of an institutional, local cancer registry. We have developed an RDF repository whose structure is driven by the OWL ontology and which permits working with the data by exploiting the semantics of its content. In this way retrieval is semantically enabled, so that queries are independent of the relational data structures of conventional databases. Our technological infrastructure has permitted us to develop a semantic searcher for navigating through the complete cancer registry, to extract semantic profiles of the patients, and to analyse the structure of disease courses. Our approach provides powerful and precise search capabilities assisted by a customisable dashboard adaptable to the requirements of each user. This proposal is very similar to the tools presented in [43], but we innovate by permitting users to generate re-usable templates. Furthermore, the templates not only allow the generation of search forms but also of parameterised, user-customisable dashboards.
The platform permits the use of the entities defined in the OWL ontology for creating queries in a more intuitive way than using a traditional relational model. Furthermore, the use of a NoSQL database (e.g. an RDF repository) allows for a robust and scalable architecture for large clinical data warehouses [49]. Another important advantage of using semantic knowledge modelling is the possibility of sharing information and comparing clinical cases and processes. The semantic profiles enable the generation of timelines for different patient records. Our approach combines multimedia and temporal visualisations [24] which can be customised by the users. The semantic profiles can be aggregated, hence enabling the generation of timelines of a patient group with similar characteristics. This visualisation can be used as a graphical representation of a Bayesian network. Clinicians can interact with the visualisation to discover likely courses of patients' diseases. The platform offers data analysis based on term enrichment to support clinicians in generating groups of patients.

One limitation of this work is the application of a preliminary version of an ontology of cancer registry data. This ontology needs to be reviewed and extended. However, we believe that the OQuaRE quality scores of the ontology permit its use for proof-of-concept implementations and experiments such as the one presented in this work. Another limitation is the use of simulated data, because real data would enable a more reliable (1) validation of the correctness and completeness of the system, (2) testing of the performance of the system, and (3) evaluation of the impact of missing data on the performance [8]. In this work, we have been able to evaluate only some components of the platform.
A complete evaluation would mean measuring the following: efficiency, usability, the usefulness of the graphical representations of the analysis of the disease courses of patients or groups of patients, and the users' capacity to develop new customisable dashboards. The results of this work were shown in a clinical session of the epidemiological service of our largest regional hospital, the Virgen de la Arrixaca Hospital in Murcia, Spain. The physicians showed their interest in applying the same methodology to the Colorectal Cancer Prevention Program of the Region of Murcia (Spain). This use case will include real data from 322,869 patients recruited since 2006. Nowadays, the physicians can generate customisable dashboards12 and they are interested in predicting the future level of risk of patients. In addition, a study combining real data from the Department of Epidemiology of the Murcia Regional Health Council (Spain) and the cancer registry of the Comprehensive Cancer Center Freiburg (Germany) is planned. On the clinical side, this would permit performing studies with data originating in different registries, as well as comparative studies on the characteristics and evolution of cancer patients in different populations or on clinical oncology practice in these regions. On the technical side, this would permit exploiting the fact that ontology-based approaches facilitate data integration. Although data integration has not been investigated in this work, we believe that sharing the same ontology for different registries would enable interoperability, and the data could be jointly exploited by means of distributed SPARQL queries. By the same means, they could also be used to create an integrated data warehouse. The decision between both implementation options depends on the requirements of the use case, on the time cost of executing the distributed queries, and on the effort needed to maintain the data warehouse.
However, this effort does not imply major changes in the RDF data representation. Such a study could also test how the ontology copes with different registries, which we believe is a relevant quality indicator for our ontology. We plan to extend the platform with studies of other chronic pathologies, which might also include a clinical validation. In this way, we plan to apply the platform to the monitoring of clinical trials thanks to the flexibility of the ODS and the customisable dashboards. Furthermore, we plan to use D3SPARQL [60] to enrich the dashboard plots. Finally, we would like to use this model to generate rules that serve to automatically generate patient groups or for quality assurance of the data.

This work has demonstrated that ontologies and RDF repositories can be effectively combined for exploiting a local cancer registry. On the one hand, we constructed an ontology that models the knowledge of a local cancer registry. On the other hand, we have used semantic web technologies for building a platform to analyse the complex timelines of patients with cancer. Besides, our semantic structure has allowed for representing the aggregated disease timelines of patient groups. The semantic infrastructure has also permitted the generation of graphical representations of the knowledge stored in the cancer registry through customisable dashboards. The work is an example of how ontologies can guide the entire life cycle of an analysis platform: data transformation, exploitation and knowledge generation. These technologies allow users to configure advanced searches, build custom dashboards and establish complex analyses from semantic profiles. Furthermore, semantic technologies establish the basis for linking to external data sources and for comparative analyses with other organisations.
We believe that this work provides new insights about how semantic technologies can be applied to the exploitation of clinical data in general, and to clinical registries in particular.

1 http://www.elekta.com/healthcare-professionals/products/elekta-software/cancer-registry.html
2 http://www.oncolog.com/?cid=7
3 http://www.askcnet.org/
4 http://protege.stanford.edu/
5 http://www.ncri.ie/
6 http://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/
7 http://commons.apache.org/proper/commons-math/
8 http://sele.inf.um.es/ontologies/cancer-registry2.owl
9 http://sele.inf.um.es/SECARE/
10 http://sele.inf.um.es/ontologies/individuals.zip
11 The test has been carried out in a local machine with MySQL 5 as relational database and Virtuoso 7 as RDF repository.
12 http://sele.inf.um.es/SECOLON/

DL: Description logics
EMR: Electronic medical record
ICD: International classification of diseases
OBO: Open biomedical ontologies
ODS: Ontology-driven search
OQuaRE: Ontology quality evaluation framework
OWL: Web ontology language
PCS: Procedure coding system
RDF: Resource description framework
RDFS: Resource description framework schema
SNOMED CT: Systematic nomenclature of medicine - clinical terms
SPARQL: SPARQL protocol and RDF query language
SWIT: Semantic web integration tool
TNM: Classification of malignant tumours

Muir CS, Nectoux J. Role of the cancer registry. Natl Cancer Inst Monogr. 1977; 47:3–6.
Jensen O, Whelan S, Jensen O, Parkin D, MacLennan R, Muir C, Skeet R. Planning a cancer registry. In: Cancer Registration: Principles and Methods. Lyon: International Agency for Research on Cancer; 1991. p. 22–28.
Altmann U, Katz FR, Tafazzoli AG, Haeberlin V, Dudeck J. GTDS – a tool for tumor registries to support shared patient care. In: Proceedings of the AMIA Annual Fall Symposium: 1996.
Parkin DM. The evolution of the population-based cancer registry. Nat Rev Cancer.
2006; 6(8):603–12.
De Angelis R, Francisci S, Baili P, Marchesi F, Roazzi P, Belot A, Crocetti E, Pury P, Knijn A, Coleman M, et al. The EUROCARE-4 database on cancer survival in Europe: data standardisation, quality control and methods of statistical analysis. Eur J Cancer. 2009; 45(6):909–30.
Madigan D, Ryan PB, Schuemie M, Stang PE, Overhage JM, Hartzema AG, Suchard MA, DuMouchel W, Berlin JA. Evaluating the impact of database heterogeneity on observational study results. Am J Epidemiol. 2013; 178(4):645–51.
Shiki N, Ohno Y, Fujii A, Murata T, Matsumura Y. Unified Modeling Language (UML) for hospital-based cancer registration processes. Asian Pac J Cancer Prev. 2008; 9(4):789–96.
Caldarella A, Amunni G, Angiolini C, Crocetti E, Di Costanzo F, Di Leo A, Giusti F, Pegna AL, Mantellini P, Luzzatto L, et al. Feasibility of evaluating quality cancer care using registry data and electronic health records: a population-based study. Int J Qual Health Care. 2012; 24(4):411–8.
Tafazzoli AG, Altmann U, WÃ W. Integrated knowledge-based functions in a hospital cancer registry - specific requirements for routine applicability. In: Proceedings of the AMIA Symposium: 1999.
Berners-Lee T, Hendler J, Lassila O. The Semantic Web. Sci Am. 2001; 284(5):34–43.
Studer R, Benjamins VR, Fensel D. Knowledge engineering: principles and methods. Data Knowl Eng. 1998; 25(1):161–97.
Kalra D, Lewalle P, Rector A, Rodrigues J, Stroetman K, Surjan G, Ustun B, Virtanen M, Zanstra P. Semantic interoperability for better health and safer healthcare. In: Semantic HEALTH Report. Luxembourg: European Commission: 2009.
SemanticHealthNet. http://www.semantichealthnet.eu/. Accessed 22 Sept 2017.
Brochhausen M, Spear AD, Cocos C, Weiler G, Martín L, Anguita A, Stenzhorn H, Daskalaki E, Schera F, Schwarz U, et al. The ACGT Master Ontology and its applications – Towards an ontology-driven cancer research and management system. J Biomed Inform. 2011; 44(1):8–25.
Min H, Manion FJ, Goralczyk E, Wong YN, Ross E, Beck JR. Integration of prostate cancer clinical data using an ontology. J Biomed Inform. 2009; 42(6):1035–45.
Abidi SR. Ontology-based modeling of breast cancer follow-up clinical practice guideline for providing clinical decision support. In: Computer-Based Medical Systems, 2007. CBMS'07. Twentieth IEEE International Symposium On. IEEE: 2007. p. 542–547.
World Health Organization (WHO). International Statistical Classification of Diseases and Related Health Problems 10th Revision (ICD-10). http://apps.who.int/classifications/icd10/browse. Accessed 22 Sept 2017.
International Health Terminology Standards Development Organisation. Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT). http://www.ihtsdo.org/snomed-ct. Accessed 22 Sept 2017.
Sobin LH, Gospodarowicz MK, Wittekind C, editors. TNM Classification of Malignant Tumours. Wiley; 2011.
Centers for Medicare and Medicaid Services (CMS) and the National Center for Health Statistics (NCHS). The 2014 ICD-10-Procedure Coding System (ICD-10-PCS). http://www.cms.gov/Medicare/Coding/ICD10/2014-ICD-10-PCS.html. Accessed 22 Sept 2017.
Shortliffe EH. The evolution of electronic medical records. Acad Med. 1999; 74(4):414–9.
Jha AK, DesRoches CM, Campbell EG, Donelan K, Rao SR, Ferris TG, Shields A, Rosenbaum S, Blumenthal D. Use of electronic health records in US hospitals. N Engl J Med. 2009; 360(16):1628–38.
West VL, Borland D, Hammond WE. Innovative information visualization of electronic health record data: a systematic review. J Am Med Inform Assoc. 2015; 22(2):330–9.
Bui AA, Aberle DR, Kangarloo H. TimeLine: visualizing integrated patient records. IEEE Trans Inf Technol Biomed. 2007; 11(4):462–73.
Ratib O. From multimodality digital imaging to multimedia patient record. Comput Med Imaging Graph. 1994; 18(2):59–65.
Forslund DW, Phillips RL, Kilman DG, Cook JL. Experiences with a distributed virtual patient record system.
In: Proceedings of the AMIA Annual Fall Symposium. American Medical Informatics Association: 1996. p. 483.
Cousins SB, Kahn MG. The visual display of temporal information. Artif Intell Med. 1991; 3(6):341–57.
Powsner SM, Tufte ER. Graphical summary of patient status. The Lancet. 1994; 344(8919):386–9.
Nygren E, Henriksson P. Reading the medical record. I, Analysis of physicians' ways of reading the medical record. Comput Methods Prog Biomed. 1992; 39(1):1–12.
Nygren E, Johnson M, Henriksson P. Reading the medical record. II, Design of a human-computer interface for basic reading of computerized medical records. Comput Methods Prog Biomed. 1992; 39(1):13–25.
Bizer C, Seaborne A. D2RQ – treating non-RDF databases as virtual RDF graphs. In: Proceedings of the 3rd International Semantic Web Conference (ISWC2004), vol. 2004. Springer: 2004.
Berners-Lee T. Linked Data - Design Issues. http://www.w3.org/DesignIssues/LinkedData.html. Accessed 21 Jan 2015.
Auer S, Dietzold S, Lehmann J, Hellmann S, Aumueller D. Triplify: Light-weight Linked Data Publication from Relational Databases. In: Proceedings of the 18th International Conference on World Wide Web. WWW '09. New York: ACM: 2009. p. 621–30.
Erling O, Mikhailov I. RDF Support in the Virtuoso DBMS. In: Networked Knowledge - Networked Media. Springer, Berlin, Heidelberg: 2009. p. 7–24.
OpenLink. Virtuoso Open-Source: Mapping Relational Data to RDF in Virtuoso. http://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/VOSSQLRDF. Accessed 22 Sept 2017.
Tsinaraki C, Christodoulakis S. XS2OWL: a formal model and a system for enabling XML schema applications to interoperate with OWL-DL domain knowledge and semantic web tools. In: Digital Libraries: Research and Development; 2007. p. 124–136.
Bumans G, Cerans K. RDB2OWL: A Practical Approach for Transforming RDB Data into RDF/OWL. In: Proceedings of the 6th International Conference on Semantic Systems. I-SEMANTICS '10. New York: ACM: 2010. p. 25–1253.
Knoblock CA, Szekely P, Ambite JL, Goel A, Gupta S, Lerman K, Muslea M, Taheriyan M, Mallick P. Semi-automatically Mapping Structured Sources into the Semantic Web. In: Extended Semantic Web Conference. Springer, Berlin, Heidelberg: 2012. p. 375–390.
Jupp S, Horridge M, Iannone L, Klein J, Owen S, Schanstra J, Wolstencroft K, Stevens R. Populous: a tool for building OWL ontologies from templates. BMC Bioinforma. 2012; 13(Suppl 1):5.
Legaz-García MC, Miñarro-Giménez JA, Menárguez-Tortosa M, Fernández-Breis JT. Generation of open biomedical datasets through ontology-driven transformation and integration processes. J Biomed Semant. 2016; 7:32.
Arenas M, Cuenca Grau B, Kharlamov E, Marciuska S, Zheleznyakov D. Faceted search over ontology-enhanced RDF data. In: Proceedings of the 23rd ACM International Conference on Information and Knowledge Management. ACM: 2014. p. 939–948.
Kaufmann E, Bernstein A, Fischer L. NLP-Reduce: A naive but domain-independent natural language interface for querying ontologies. In: 4th European Semantic Web Conference (ESWC): 2007. p. 1–2.
Unger C, Bühmann L, Lehmann J, Ngonga Ngomo AC, Gerber D, Cimiano P. Template-based question answering over RDF data. In: Proceedings of the 21st International Conference on World Wide Web. ACM: 2012. p. 639–648.
Neumann EK, Quan D. BioDash: a Semantic Web dashboard for drug development. In: Pacific Symposium on Biocomputing: 2006. p. 176–187.
Dong X, Ding Y, Wang H, Chen B, Wild D. Chem2Bio2RDF dashboard: Ranking semantic associations in systems chemical biology space. In: Future of the Web in Collaborative Science (FWCS), WWW; 2010.
Rector A, Brandt S, Drummond N, Horridge M, Pulestin C, Stevens R. Engineering use cases for modular development of ontologies in OWL. Appl Ontol. 2012; 7(2):113–32.
Smith B, Ashburner M, Rosse C, Bard J, Bug W, Ceusters W, Goldberg LJ, Eilbeck K, Ireland A, Mungall CJ, Leontis N, Rocca-Serra P, Ruttenberg A, Sansone SA, Scheuermann RH, Shah N, Whetzel PL, Lewis S. The OBO Foundry: coordinated evolution of ontologies to support biomedical data integration. Nat Biotechnol. 2007; 25(11):1251–5.
Schmidtmann I. Estimating Completeness in Cancer Registries – Comparing Capture-Recapture Methods in a Simulation Study. Biom J. 2008; 50(6):1077–92.
Rea S, Pathak J, Savova G, Oniki TA, Westberg L, Beebe CE, Tao C, Parker CG, Haug PJ, Huff SM, Chute CG. Building a robust, scalable and standards-driven infrastructure for secondary use of EHR data: the SHARPn project. J Biomed Inform. 2012; 45(4):763–1.
Esteban-Gil A, Garcia-Sanchez F, Valencia-Garcia R, Fernandez-Breis JT. SocialBROKER: A collaborative social space for gathering semantically-enhanced financial information. Expert Syst Appl. 2012; 39(10):9715–22.
Jensen M, Cox AP, Ray P, Teter BE, Weinstock-Guttman B, Ruttenberg A, Diehl AD. An Ontological Representation and Analysis of Patient-reported and Clinical Outcomes for Multiple Sclerosis. In: Proceedings of the International Conference on Biomedical Ontology: 2014. p. 52–55.
Boyle EI, Weng S, Gollub J, Jin H, Botstein D, Cherry JM, Sherlock G. GO::TermFinder – open source software for accessing Gene Ontology information and finding significantly enriched Gene Ontology terms associated with a list of genes. Bioinformatics. 2004; 20(18):3710–5.
Heckerman D, Geiger D, Chickering DM. Learning Bayesian networks: The combination of knowledge and statistical data. Mach Learn. 1995; 20(3):197–243.
Dumontier M, Baker CJ, Baran J, Callahan A, Chepelev L, Cruz-Toledo J, Del Rio NR, Duck G, Furlong LI, Keath N, et al. The Semanticscience Integrated Ontology (SIO) for biomedical research and knowledge discovery. J Biomed Semant. 2014; 5(1):14. Accessed 05 Mar 2017.
Bandrowski A, Brinkman R, Brochhausen M, Brush MH, Bug B, Chibucos MC, Clancy K, Courtot M, Derom D, Dumontier M, et al. The ontology for biomedical investigations. PLoS ONE. 2016; 11(4):e0154556. Accessed 05 Mar 2017.
Yates JW, Chalmer B, McKegney FP, et al. Evaluation of patients with advanced cancer using the Karnofsky performance status. Cancer. 1980; 45(8):2220–4.
American Society of Anesthesiologists. ASA Physical Status Classification System. https://www.asahq.org/clinical/physicalstatus.htm. Accessed 22 Sept 2017.
Cardillo E, Tamilin A, Eccher C, Serafini L. ICD-10 Ontology. https://dkm.fbk.eu/technologies/icd-10-ontology. Accessed 22 Sept 2017.
Duque-Ramos A, Fernández-Breis JT, Stevens R, Aussenac-Gilles N, et al. OQuaRE: A SQuaRE-based approach for evaluating the quality of ontologies. J Res Pract Inf Technol. 2011; 43(2):159.
Katayama T. D3SPARQL: JavaScript library for visualization of SPARQL results. In: Proceedings of Semantic Web Applications and Tools for Health Care and Life Sciences: 2014.

This project has been possible thanks to the Spanish Ministry of Economy, Industry and Competitiveness and the FEDER Programme through grant TIN2014-53749-C2-2-R, and to the Fundación Séneca (15295/PI/10, 19371/PI/14).

The Cancer Registry ontology is freely available at http://sele.inf.um.es/ontologies/cancer-registry.owl. The semantic web platform for the analysis and visualisation of a cancer registry is available with the use case data at http://sele.inf.um.es/SECARE/. The user and password to sign in are "anonymous". The RDF dataset is available at http://sele.inf.um.es/SECARE/individuals.zip.

Fundación para la Formación e Investigación Sanitarias de la Región de Murcia, Biomedical Informatics & Bioinformatics Platform, IMIB-Arrixaca, C/ Luis Fontes Pagán, n° 9, Murcia, 30003, Spain
Angel Esteban-Gil
Dpto.
Informática y Sistemas, Facultad de Informática, Universidad de Murcia, IMIB-Arrixaca, Facultad de Informática, Campus de Espinardo, Murcia, 30100, Spain Jesualdo Tomás Fernández-Breis Institute for Medical Biometry and Statistics, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Stefan-Meier-Str. 26, Freiburg, 79104, Germany Martin Boeker Correspondence to Jesualdo Tomás Fernández-Breis. Conceived and designed the approach: AEG, JTFB, MB. Implemented the approach and performed the experiments: AEG, JTFB, MB. Analysed the results: AEG, JTFB, MB. Contributed to the writing of the manuscript: AEG, JTFB, MB. All the authors have approved the final manuscript. Esteban-Gil, A., Fernández-Breis, J. & Boeker, M. Analysis and visualization of disease courses in a semantically-enabled cancer registry. J Biomed Semant 8, 46 (2017). https://doi.org/10.1186/s13326-017-0154-9
Using Units to Deal With Density Written by Colin+ in ninja maths, pirate maths. Glancing over sample papers for the new GCSE, I stumbled on this: Zahra mixes 150g of metal A and 150g of metal B to make 300g of an alloy. Metal A has a density of $19.3 \unit{g/cm^3}$. Metal B has a density of $8.9 \unit{g/cm^3}$. Work out the density of the alloy. I don't think I'm being mean to say that this would stump the majority of students. It's probably designed to. Ahoy there, Mathematical Pirate! "Aharr! I spies a compound unit!" "Are you going to talk in that ridiculous manner for the whole blog post? I happen to know you're from Windsor." "The rough end of Windsor." "Fine. But let's talk normally so as not to distract the readers." "Fine. You see that unit there, the $\unit{g/cm^3}$?" "That's a dead giveaway that you can use a formula triangle." "Teachers don't like those, do they?" "Many don't. I'm in the 'anything that works is OK at this stage' camp. Say, you notice how when I talk normally, it's really hard to keep track of who's saying what?" "That's only because I don't pepper the conversation with 'said the Pirate.'" "Arr. Now, what do you measure in grams? It's mass. That's on the top of the unit, so it goes on the top of the triangle. What's in centimetres cubed? That's volume, so it goes on the bottom. You've got one space left, so you may as well put the thing you're measuring, density, in there." "So $M$ on top, and $V$ and $D$ on the bottom row, in either order." "In either order. 
So to find out the volume of 150g of each of the metals, we work out $150 \div 19.3$ and $150 \div 8.9$," peppered the Pirate. "If the Ninja's not around, I'll work those out on the calculator: 7.772 and 16.854 centimetres cubed, respectively." "So the volume of the alloy is those added together..." "... $24.626\unit{cm^3}$ ..." "... and the density (referring back to the triangle) is 300 divided by that number you just said..." "... $12.18\unit{g/cm^3}$." "Easy as pie-racy, arr. Ut-oh!" Enter the Ninja "Er... hello, sensei, have you done something different with your bandanna?" "Did someone mention a formula triangle?" "Did they? I think you must have misheard." "Shenanigans, I say, shenanigans!" "Strong language, sensei!" "Deservedly so. In any case, it is simple algebra: when the masses are the same, you just need the harmonic mean: $\br{\frac{d_1^{-1}+d_2^{-1}}{2}}^{-1}$." "... obviously." "Or better, $\frac{2d_1 d_2}{d_1+d_2}$." "I'll concede that that is quite pretty." "Not with these numbers. But $19.3 \times 8.9$ is 180 less about 5%, and their sum is 28.2. Let's call it $360 \div 28$..." "... some say that 360 is a nice number because it has lots of factors, sensei..." "Some are fools. $360 \div 28$ is $90 \div 7$..." "A shade less than 13..." "12.857142 recurring, if you want to get snotty about it, but we don't have that kind of accuracy here. We need to lose about 5% from the top and an extra 1% from the bottom, so I'd reckon taking off 0.6 or 0.7 from that would be ok." "Making 12.2, give or take." The Mathematical Ninja nodded. Colin is a Weymouth maths tutor, author of several Maths For Dummies books and A-level maths guides. He started Flying Colours Maths in 2008. He lives with an espresso pot and nothing to prove. 
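For readers who'd rather let a machine do the arithmetic, here's a quick Python sketch of both approaches — the Pirate's formula triangle and the Ninja's harmonic-mean shortcut. (The function names are mine, not from the post.)

```python
# Zahra's alloy: equal masses of two metals with known densities.
def alloy_density(m1, d1, m2, d2):
    """Density of a mix of mass m1 at density d1 and mass m2 at density d2."""
    total_volume = m1 / d1 + m2 / d2  # volume = mass / density (formula triangle)
    return (m1 + m2) / total_volume

def ninja_density(d1, d2):
    """The Ninja's shortcut for equal masses: the harmonic mean 2*d1*d2/(d1 + d2)."""
    return 2 * d1 * d2 / (d1 + d2)

print(round(alloy_density(150, 19.3, 150, 8.9), 2))  # 12.18
print(round(ninja_density(19.3, 8.9), 2))            # 12.18
```

The two agree exactly for equal masses, which is the Ninja's point.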
Study on the classification of capsule endoscopy images Xiaodong Ji1,2, Tingting Xu1, Wenhua Li1,3 & Liyuan Liang1 EURASIP Journal on Image and Video Processing volume 2019, Article number: 55 (2019) Cite this article The wireless capsule endoscope allows painless endoscopic imaging of the human gastrointestinal tract. However, the whole procedure generates a large number of capsule endoscopy images (CEIs) to be read and interpreted. To save physicians' time and energy, computer-aided analysis methods are urgently needed. Due to the influence of air bubbles, illumination, and shooting angle, however, it is difficult for a conventional classification method to classify CEIs correctly into healthy and diseased categories. To this end, in the paper, a new feature extraction method is proposed based on the color histogram, wavelet transform, and co-occurrence matrix. First, an improved color histogram is calculated in the HSV (hue, saturation, value) space. Meanwhile, the wavelet transform is used to filter out the low-frequency parts of the CEIs, and the characteristic values of the reconstructed CEIs' co-occurrence matrices are then calculated. Next, by employing the proposed feature extraction method and the BPNN (back propagation neural network), a novel computer-aided classification algorithm is developed, where the feature values of the color histogram and co-occurrence matrix are normalized as the inputs of the BPNN for training and classification. Experimental results show that the accuracy of the proposed algorithm is up to 99.12%, which is much better than that of the compared conventional methods. In 2001, the world's first wireless capsule endoscopy system, which allows painless endoscopic imaging of the human gastrointestinal tract, was approved by the US Food and Drug Administration for use in clinical practice [1]. During the inspection process, however, a large number of capsule endoscopy images (CEIs) are produced. 
The CEIs used in the paper are provided by Hangzhou Hitron Technologies Co., Ltd., whose independently developed HT-type wireless capsule endoscope system shoots at 2 frames per second. Therefore, 57,600 CEIs are generated after 8 h of work. Reading and interpreting these CEIs manually would cost physicians much time and energy. Thus, it is imperative to develop an efficient computer-aided analysis method able to classify CEIs automatically and with high accuracy. Generally, the features of a CEI include shape, color, and texture. It is noted in [2] that the features employed for classification directly affect the final discrimination performance. In [2], the author chooses the color moments and the gray-level co-occurrence matrix as image features. In [3], word-based color histogram features are extracted from the YCbCr color space, and the support vector machine is then used as the classifier. In [4], texture features based on the gray-level co-occurrence matrix are extracted from the discrete wavelet transform sub-bands in the HSV space. In [5], the CEIs are color-rotated so as to boost the chromatic attributes of ulcer areas, and the ULBP features of the CEIs are extracted from the RGB space. The authors of [6] propose a method for distinguishing diseased CEIs from healthy CEIs based on the contourlet transform and the local binary pattern (LBP). The authors of [7] extract five color features in the HSV color space to differentiate between healthy and non-healthy images. In [8], an automatic detection method is proposed based on color statistical features extracted from histogram probability. It is worth mentioning that these feature extraction methods are based either on the full image or on its low-frequency part, and no consideration is given to the middle- and high-frequency parts of the images, which actually contain abundant texture information. 
Very recently, the authors of [9] investigated wireless capsule endoscopy video and proposed a detection method based on higher- and lower-order statistical features. The rest of the paper is organized as follows: Section 2 describes the classification algorithm proposed in the paper. Section 3 details the methods used for extracting the color and texture features, respectively. In Section 4, the construction of the BPNN is explained. Section 5 reports experimental results. Finally, the concluding remarks are presented in Section 6. It is well known that CEIs contain rich color and texture information, and that lesion and non-lesion areas show significant color and texture differences. To this end, in the paper, a novel feature extraction method based on the color histogram, wavelet transform, and co-occurrence matrix is developed with the aim of improving the classification accuracy. The color and texture features are extracted, respectively, by the improved color histogram and by the wavelet-transform-based co-occurrence matrix. The CEIs used in the paper are divided into training and testing sets. CEIs in the training set are used to train the BPNN (back propagation neural network), and those in the testing set are used for classification. The extracted feature values of the CEIs in the training set are normalized as the inputs of the BPNN, and the classification results on the testing set are obtained by the trained BPNN. Simulation experiments show that the proposed algorithm can effectively divide the CEIs into two categories, i.e., healthy and diseased images, with high accuracy. As shown in Fig. 
1, the algorithm proposed in the paper includes three steps: (1) extracting the color features: (a) transforming the CEIs from the RGB to the HSV space, (b) calculating the color histogram after quantization, and (c) selecting the appropriate bins and then constructing the color feature vector; (2) extracting the texture features: (a) selecting the middle- and high-frequency sub-bands and then reconstructing the images through the wavelet transform, and (b) computing the characteristic values of the co-occurrence matrix and constructing the texture feature vector; (3) training and classification: (a) normalizing the color and texture features of the training images and then training the BPNN, and (b) using the trained BPNN to classify the testing images.
Fig. 1 Flow chart of the proposed algorithm
Feature extraction
Extracting color features
Since the conventional color histogram [10] has a high feature dimension and thus is not conducive to classification, an improved color histogram method is proposed and used to extract the color features of the CEIs. It is known that the HSV color space is a natural color representation model and thus better reflects the physiological perception of the human eye [11]. Therefore, the H, S, and V components are quantified non-uniformly in the HSV color space according to the color perception characteristics of humans. Then, the color histogram is calculated, and the color feature vector is constructed after selecting the appropriate bins from the calculated color histogram. The H, S, and V components of a CEI, denoted, respectively, by h, s, and v, are quantified by using Eq. (1) [12]. 
$$ {h}_q=\left\{\begin{array}{ll}0 & \mathrm{if}\ h\in \left(315,20\right]\\ 1 & \mathrm{if}\ h\in \left(20,40\right]\\ 2 & \mathrm{if}\ h\in \left(40,75\right]\\ 3 & \mathrm{if}\ h\in \left(75,155\right]\\ 4 & \mathrm{if}\ h\in \left(155,190\right]\\ 5 & \mathrm{if}\ h\in \left(190,270\right]\\ 6 & \mathrm{if}\ h\in \left(270,295\right]\\ 7 & \mathrm{if}\ h\in \left(295,315\right]\end{array}\right.\quad {s}_q=\left\{\begin{array}{ll}0 & \mathrm{if}\ s\in \left(0,0.2\right]\\ 1 & \mathrm{if}\ s\in \left(0.2,0.7\right]\\ 2 & \mathrm{if}\ s\in \left(0.7,1\right]\end{array}\right.\quad {v}_q=\left\{\begin{array}{ll}0 & \mathrm{if}\ v\in \left(0,0.2\right]\\ 1 & \mathrm{if}\ v\in \left(0.2,0.7\right]\\ 2 & \mathrm{if}\ v\in \left(0.7,1\right]\end{array}\right. $$ In order to reduce the feature dimension, the three color components are synthesized into a one-dimensional feature vector ϕ [13], giving $$ \phi ={h}_q{Q}_s{Q}_v+{s}_q{Q}_v+{v}_q, $$ where Qs and Qv are the quantization levels of the S and V components, respectively. According to the quantization levels given by Eq. (1), Qs and Qv are set to 3, and then Eq. (2) can be rewritten as $$ \phi =9{h}_q+3{s}_q+{v}_q, $$ where ϕ ∈ [0, 1, 2, … , 71]. According to Eq. (3), we can obtain a characteristic histogram with 72 bins. Figure 2 presents the quantized color histogram of the case image Q. Here, the case image Q is randomly selected from the diseased images in the training set. In Fig. 2, Fϕ represents the ratio of the number of pixels with characteristic value ϕ to the total number of pixels in the image matrix after quantization. 
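As a concrete illustration, the quantization of Eqs. (1)–(3) can be sketched in a few lines of Python. (The function names are mine; the bin edges follow the equations above, and hue is taken in degrees.)

```python
def quantize_h(h):
    """Quantize hue (degrees); the wrap-around bin (315, 360] ∪ [0, 20] maps to 0."""
    bins = [(20, 40, 1), (40, 75, 2), (75, 155, 3), (155, 190, 4),
            (190, 270, 5), (270, 295, 6), (295, 315, 7)]
    for lo, hi, q in bins:
        if lo < h <= hi:
            return q
    return 0

def quantize_sv(x):
    """Shared 3-level quantization for the S and V components."""
    if x <= 0.2:
        return 0
    return 1 if x <= 0.7 else 2

def phi(h, s, v):
    """One-dimensional color index of Eq. (3); phi lies in 0..71."""
    return 9 * quantize_h(h) + 3 * quantize_sv(s) + quantize_sv(v)

print(phi(10, 0.5, 0.9))  # h_q=0, s_q=1, v_q=2 -> 5
```

Applying `phi` to every pixel of an HSV image and counting the resulting values gives the 72-bin histogram Fϕ.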
Of note is that the dimension of Fϕ is still high, and the 72 characteristic values contain many zeros, causing redundancy that is not conducive to classification. To this end, an improved color histogram is proposed. In the paper, 30 healthy and 30 diseased CEIs are randomly selected as sample images, and their color histograms are calculated. For a CEI, if Fϕ > 1/72, the corresponding ϕ is recorded; thus, fewer than 72 values are recorded. For each CEI, only the 15 largest values of Fϕ are selected and assembled into the color feature vector. By employing this method, we obtain the color feature vector of the case image Q as given in Eq. (4). $$ {F}_{\mathrm{color}}=\left[{F}_1,{F}_2,{F}_3,{F}_5,{F}_6,{F}_{10},{F}_{11},{F}_{14},{F}_{23},{F}_{28},{F}_{37},{F}_{46},{F}_{47},{F}_{56},{F}_{65}\right] $$
Fig. 2 The color histogram of the case image Q: a the case image Q and b the quantized color histogram of the case image Q
Extracting texture features
It is known that the lesion areas of a CEI differ significantly from the non-diseased regions. Therefore, extracting the texture features of CEIs is of crucial importance to the design of a practical classification algorithm. In the paper, the pyramid wavelet decomposition is adopted, and the Daubechies function is chosen as the basis function of the wavelet transform; both are widely employed in the literature, e.g., [9, 10]. Figure 3 is a schematic diagram of the case image Q, where a three-level wavelet decomposition is used. Denote by $$ {Q}^i=\left\{{L}_{\alpha}^i,{H}_{\beta}^i\right\},\quad i=1,2,3,\ \alpha =1,2,3,\ \beta =1,\dots, 9, $$ the decomposed version of the case image Q, where L denotes the low-frequency parts of the horizontal and vertical components of a CEI, H denotes the corresponding middle- and high-frequency parts, α represents the decomposition level, β stands for the wavelet band, and i represents the color channel. 
Fig. 3 A schematic diagram of the case image Q under a three-level wavelet decomposition: a the case image after wavelet decomposition and b a schematic diagram of the three-level wavelet decomposition
Note that conventional computer-aided analysis methods mostly operate on the low-frequency band. However, the texture and edge information are mainly concentrated in the middle- and high-frequency bands. Therefore, in the paper, the middle- and high-frequency sub-bands are selected to reconstruct the image, and the texture information is then extracted accordingly. Let Oi be the reconstructed CEI. For each color channel, we have $$ {O}^i=\mathrm{IDWT}\left\{{H}_{\beta}^i\right\},\quad i=1,2,3,\ \beta =1,2,\cdots, 9. $$ Here, IDWT{⋅} denotes the inverse discrete wavelet transform, β stands for the wavelet band, and i represents the color channel. The co-occurrence matrix \( {\mathbf{W}}_T^{\theta}\left(m,n\right) \) of the R, G, and B channels of the reconstructed CEI is then calculated, where T ∈ {R, G, B}; the entry (m, n) of the co-occurrence matrix counts how many times two pixels at distance d along a given direction θ take the color levels m and n. In practice, θ is commonly set to 0°, 45°, 90°, or 135°. The co-occurrence matrix reflects not only the distribution characteristics of brightness but also the positional distribution of pixels with the same or similar brightness; it is a second-order statistical feature of the variation of image brightness [14]. Next, the co-occurrence matrix is normalized; let \( {w}_T^{\theta}\left(m,n\right) \) be the entry (m, n) of the normalized co-occurrence matrix, where T ∈ {R, G, B} and θ ∈ {0∘, 45∘, 90∘, 135∘}. In the paper, we select four commonly used features, namely, the angular second moment, contrast, entropy, and correlation, from all the features of the co-occurrence matrix [15]. 
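To make the sub-band idea concrete, here is a single-level Haar sketch in Python/NumPy that zeroes the approximation (low-frequency) sub-band and reconstructs from the detail sub-bands only. The paper uses three-level Daubechies wavelets; Haar and a single level keep the sketch short, and the function names are mine.

```python
import numpy as np

def haar2d(img):
    """One level of an orthonormal 2-D Haar transform on an even-sized image."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    ll = (a + b + c + d) / 2  # approximation (low-frequency) sub-band
    lh = (a - b + c - d) / 2  # horizontal detail
    hl = (a + b - c - d) / 2  # vertical detail
    hh = (a - b - c + d) / 2  # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d."""
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return out

def detail_only(img):
    """Reconstruction from the middle/high-frequency sub-bands only (LL zeroed)."""
    ll, lh, hl, hh = haar2d(img)
    return ihaar2d(np.zeros_like(ll), lh, hl, hh)
```

Zeroing LL removes each 2×2 block's mean, so `detail_only` keeps exactly the local texture and edge variation that this section argues is discriminative.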
The angular second moment, contrast, entropy, and correlation represent, respectively, the homogeneity, inertia, randomness, and directional linearity of the co-occurrence matrix, and are defined as $$ {E}_T^{\theta }=\sum \limits_{m=1}^D\sum \limits_{n=1}^D{\left[{w}_T^{\theta }\left(m,n\right)\right]}^2 $$ $$ {I}_T^{\theta }=\sum \limits_{m=1}^D\sum \limits_{n=1}^D{\left(m-n\right)}^2{w}_T^{\theta}\left(m,n\right) $$ $$ {\Pi}_T^{\theta }=-\sum \limits_{m=1}^D\sum \limits_{n=1}^D{w}_T^{\theta}\left(m,n\right)\log {w}_T^{\theta}\left(m,n\right) $$ $$ {A}_T^{\theta }=\frac{\sum \limits_{m=1}^D\sum \limits_{n=1}^D mn\,{w}_T^{\theta}\left(m,n\right)-{\mu}_1^{\theta }{\mu}_2^{\theta }}{\sigma_1^{\theta }{\sigma}_2^{\theta }} $$ where \( {\mu}_1^{\theta } \), \( {\mu}_2^{\theta } \), \( {\sigma}_1^{\theta } \), and \( {\sigma}_2^{\theta } \) are defined as follows: $$ {\mu}_1^{\theta }=\sum \limits_{m=1}^Dm\sum \limits_{n=1}^D{w}_T^{\theta}\left(m,n\right), $$ $$ {\mu}_2^{\theta }=\sum \limits_{n=1}^Dn\sum \limits_{m=1}^D{w}_T^{\theta}\left(m,n\right), $$ $$ {\sigma}_1^{\theta }=\sum \limits_{m=1}^D{\left(m-{\mu}_1^{\theta}\right)}^2\sum \limits_{n=1}^D{w}_T^{\theta}\left(m,n\right), $$ $$ {\sigma}_2^{\theta }=\sum \limits_{n=1}^D{\left(n-{\mu}_2^{\theta}\right)}^2\sum \limits_{m=1}^D{w}_T^{\theta}\left(m,n\right). $$ Here, D is the maximum color level of a CEI. It is worth mentioning that the homogeneity, inertia, randomness, and directional linearity of the co-occurrence matrix are widely employed to construct the texture feature vector of CEIs in the literature, e.g., references [2, 4] and the references therein. In the paper, d = 1 is assumed. 
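A sketch of the normalized co-occurrence matrix and the four features for a single direction (distance d = 1 as in the paper) might look as follows. Note one deliberate deviation: I use standard deviations in the denominator of the correlation, which is the usual Haralick convention; all function names are mine.

```python
import numpy as np

def glcm(img, dx, dy, levels):
    """Normalized co-occurrence matrix for offset (dx, dy) on an integer image."""
    w = np.zeros((levels, levels))
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                w[img[r, c], img[r2, c2]] += 1
    return w / w.sum()

def glcm_features(w):
    """Angular second moment, contrast, entropy, and correlation of w."""
    m = np.arange(w.shape[0])[:, None]
    n = np.arange(w.shape[1])[None, :]
    asm = np.sum(w ** 2)                    # homogeneity
    contrast = np.sum((m - n) ** 2 * w)     # inertia
    nz = w[w > 0]
    entropy = -np.sum(nz * np.log(nz))      # randomness (0*log 0 treated as 0)
    mu1, mu2 = np.sum(m * w), np.sum(n * w)
    var1 = np.sum((m - mu1) ** 2 * w)
    var2 = np.sum((n - mu2) ** 2 * w)
    corr = (np.sum(m * n * w) - mu1 * mu2) / np.sqrt(var1 * var2)
    return asm, contrast, entropy, corr
```

For the checkerboard-like image [[0, 1], [1, 0]] with a horizontal offset, the matrix is w[0,1] = w[1,0] = 0.5, giving ASM 0.5, contrast 1, entropy ln 2, and correlation −1.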
According to the characteristic values calculated above, the texture feature vector of a CEI can be constructed as $$ {Z}_T=\left[\overline{E_T^{\theta }},\widehat{E_T^{\theta }},\overline{I_T^{\theta }},\widehat{I_T^{\theta }},\overline{\Pi_T^{\theta }},\widehat{\Pi_T^{\theta }},\overline{A_T^{\theta }},\widehat{A_T^{\theta }}\right], $$ where \( \overline{X_T^{\theta }}=\frac{\sum \limits_{\theta \in \left\{{0}^{\circ },{45}^{\circ },{90}^{\circ },{135}^{\circ}\right\}}{X}_T^{\theta }}{4} \), \( \widehat{X_T^{\theta }}=\sqrt{\frac{\sum \limits_{\theta \in \left\{{0}^{\circ },{45}^{\circ },{90}^{\circ },{135}^{\circ}\right\}}{\left({X}_T^{\theta }-\overline{X_T^{\theta }}\right)}^2}{4}} \), X ∈ {E, I, Π, A}, T ∈ {R, G, B}, and θ ∈ {0∘, 45∘, 90∘, 135∘}. The eight-dimensional texture features of the R, G, and B channels obtained above are added element-wise to form the final texture features, as given in Eq. (16). $$ {F}_{\mathrm{texture}}={Z}_R+{Z}_G+{Z}_B $$
Training and classification
The BPNN is a supervised feedforward neural network with strong nonlinear mapping ability and a flexible network structure. The main idea behind the proposed algorithm is to train the network with samples of known results and then use the trained network to recognize and classify the images. The BPNN consists of input, hidden, and output layers; the hidden part can have one or more layers. Neurons in adjacent layers are connected by weights, but there are no connections between the neurons within a layer. The structure of the commonly used three-layer BPNN is shown in Fig. 4.
Fig. 4 The structure of a three-layer BPNN
The training phase of the BPNN is mainly divided into two steps: forward and backward propagation. 
During the forward propagation step, the feature values of the training samples propagate from the input layer, through a nonlinear transformation in the hidden layer, to the output layer, producing the output results. The outputs are compared with the expected outputs; if they differ, the backward propagation step begins. During the backward propagation step, the error signals propagate layer by layer from the output layer, through the hidden layer, to the input layer, and the errors are reduced by adjusting the weights. In principle, the BP algorithm takes the squared network error as the objective function and uses gradient descent to minimize it. The experiments were conducted in MATLAB R2016a. The Daubechies function is chosen as the basis function of the wavelet transform, and the maximum decomposition level is set to 3. The extracted color and texture features are concatenated as the BPNN input feature vector, as given in Eq. (17). $$ F=\left[{F}_{\mathrm{color}},{F}_{\mathrm{texture}}\right] $$ The image library used in the paper is provided by Hangzhou Hitron Technologies Co., Ltd. It includes 1251 clinical stomach capsule images with a resolution of 480 × 480 in bmp format. Among them, there are 135 diseased and 1116 healthy images. In each experiment, 108 diseased and 893 healthy images are randomly selected to form a training set of 1001 images, and the remaining 250 images are used as the BPNN testing set. According to previous experience, given a suitable number of hidden-layer nodes, a neural network with a single hidden layer can approximate any continuous function on a bounded domain with arbitrary precision. Therefore, the number of layers of the BPNN is set to 3 in the experiments. 
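The forward/backward steps described above can be sketched with a minimal NumPy network (one hidden layer, sigmoid units, squared error, batch gradient descent). The layer sizes, seed, learning rate, and the XOR toy data here are my illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyBPNN:
    def __init__(self, n_in, n_hidden, n_out, lr=0.5):
        self.w1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0.0, 0.5, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(x @ self.w1 + self.b1)       # hidden activations
        self.y = sigmoid(self.h @ self.w2 + self.b2)  # output activations
        return self.y

    def backward(self, x, target):
        # delta rule for squared error with sigmoid units
        d_out = (self.y - target) * self.y * (1 - self.y)
        d_hid = (d_out @ self.w2.T) * self.h * (1 - self.h)
        self.w2 -= self.lr * self.h.T @ d_out
        self.b2 -= self.lr * d_out.sum(axis=0)
        self.w1 -= self.lr * x.T @ d_hid
        self.b1 -= self.lr * d_hid.sum(axis=0)

# XOR as a toy stand-in for the 23-dimensional feature vectors
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
T = np.array([[0.0], [1.0], [1.0], [0.0]])
net = TinyBPNN(2, 8, 1)
for _ in range(5000):
    net.forward(X)
    net.backward(X, T)
```

With the paper's sizes, this would be `TinyBPNN(23, 15, 1)` trained on the normalized [Fcolor, Ftexture] vectors.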
The number of hidden nodes is usually given by the empirical formula [16] $$ \omega =\sqrt{M+P}+\delta, $$ where M and P denote the numbers of input- and output-layer nodes, respectively, and δ is a constant between 1 and 10. In the experiments, M = 23 and P = 1 are assumed, and thus ω is an integer ranging from 6 to 15. The learning rate, the maximum training epoch, and the expected training error are set to 0.01, 5000, and 0.0001, respectively. The experiment was carried out with different values of ω within this range. Figure 5 presents selected experimental results that give insight into how to set the value of ω.
Fig. 5 Comparison of the classification results of the BPNN with different ω values
Based on Fig. 5, ω is set to 15 in the following experiments. The classification results of the proposed algorithm are compared with those of the existing methods, as shown in Table 1, where the values of TPR (true positive rate) and TNR (true negative rate) are calculated by Eqs. (19) and (20). $$ \mathrm{TPR}=\frac{\text{number of images correctly recognized as diseased}}{\text{number of diseased images}} $$ $$ \mathrm{TNR}=\frac{\text{number of images correctly recognized as healthy}}{\text{number of healthy images}} $$
Table 1 Performance comparison between the proposed algorithm and the existing methods
"Method 1" means that, in each channel of the image in the HSV space, the wavelet transform is used to select the L, H4, H5, and H6 bands with the maximum decomposition level set to 2 and to reconstruct the images accordingly; the characteristic values of the co-occurrence matrix of the reconstructed images are then computed, and the BPNN is used for recognition [4]. 
"Method 2" means that the extracted features are filtered according to the average influence value and then classified by an SVM [2]. "Method 3" and "Method 4" use Fcolor and Ftexture, respectively, as the input of the BPNN. The results confirm that the proposed method is superior to the existing methods in terms of both practicability and accuracy, and its accuracy of 99.12% can well meet the clinical requirements. The paper proposed a novel method of extracting image features for the classification of capsule endoscopy images, where the color histogram is used to extract the color features and the wavelet-transform-based gray-level co-occurrence matrix is used to extract the texture features. Using the BPNN classifier, the proposed method achieves an accuracy of 99.12%, which is superior to the existing methods. During the investigation, an interesting and practical problem arose: how to recognize or classify the types of diseases from the provided CEIs. This will be investigated in our future work. BPNN: Back propagation neural network CEI: Capsule endoscopy image HSI: Hue, saturation, intensity HSV: Hue, saturation, value RGB: Red, green, blue TNR: True negative rate TPR: True positive rate ULBP: Uniform local binary pattern D.L. Donoho, Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006). https://doi.org/10.1109/TIT.2006.871582 J. Deng, L. Zhao, Image classification model with multiple feature selection and support vector machine. J. Jilin Univ. (Science Edition). 54(4), 862–866 (2016). https://doi.org/10.13413/j.cnki.jdxblxb.2016.04.33 Y. Yuan, B. Li, Q. Meng, Bleeding frame and region detection in the wireless capsule endoscopy video. IEEE J. Biomed. Health. 20(2), 624–630 (2016). https://doi.org/10.1109/JBHI.2015.2399502 D.J. Barbosa, J. Ramos, C.S. Lima, Detection of small bowel tumors in capsule endoscopy frames using texture analysis based on the discrete wavelet transform. 30th Annu. Int. Conf. IEEE Eng. Med. Biol. Soc, 1102–1105 (2008). 
https://doi.org/10.1109/IEMBS.2008.4649837 V.S. Charisis, C. Katsimerou, L.J. Hadjileontiadis, C.N. Liatsos, G.D. Sergoados, Computer-aided capsule endoscopy images evaluation based on color rotation and texture features: an educational tool to physicians. 2013 IEEE Int. Sym. CBMS, 203–208 (2013). https://doi.org/10.1109/CBMS.2013.6627789 M. Mathew, V.P. Gopi, Transform based bleeding detection technique for endoscopic images. ICECS, 1730–1734 (2015). https://doi.org/10.1109/ECS.2015.7124882 S. Suman, F.A.B. Hussin, N. Walter, et al., Detection and classification of bleeding using statistical color features for wireless capsule endoscopy images. IEEE Int. Conf. Signal Info. Proces., 1–5 (2017). https://doi.org/10.1109/ICONSIP.2016.7857440 S. Sainju, F.M. Bui, K. Wahid, Bleeding detection in wireless capsule endoscopy based on color features from histogram probability. IEEE Canadian Conf. Electr. Comput. Eng., 1–4 (2013). https://doi.org/10.1109/CCECE.2013.6567779 T. Ghosh, S.A. Fattah, K.A. Wahid, Automatic computer aided bleeding detection scheme for wireless capsule endoscopy (WCE) video based on higher and lower order statistical features in a composite color. J. Med. Biol. Eng. 38(2), 482–496 (2018). https://doi.org/10.1007/s40846-017-0318-1 D. Sudarvizhi, Feature based image retrieval system using Zernike moments and Daubechies Wavelet Transform. Int. Conf. Recent Trends Info. Technol., 1–6 (2016). https://doi.org/10.1109/ICRTIT.2016.7569541 N. Suciati, D. Herumurti, A.Y. Wijava, Fractal-based texture and HSV color features for fabric image retrieval. IEEE Int. Conf. Control Syst. Comput. Eng., 178–182 (2015). https://doi.org/10.1109/ICCSCE.2015.7482180 X. Yu, M. Shen, The uniform and non-uniform quantification effects on the extraction of color histogram. J Qinghai Univ (Natural Science Edition). 33(1), 68–67 (2015). https://doi.org/10.13901/j.cnki.qhwxxbzk.2015.01.014 R. Jain, P.K. 
Johari, An improved approach of CBIR using color based HSV quantization and shape based edge detection algorithm. IEEE Int. Conf. Recent Trends Elec. Info. Commun. Tech. (RTEICT), 1970–1975 (2016). https://doi.org/10.1109/RTEICT.2016.7808181 F. Zhu, B. Zhu, P. Li, Z. Wang, L. Wei, Quantitative analysis and identification of liver B-scan ultrasonic image based on BP neural network. Int. Conf. Optoelectron Microelectron., 62–66 (2013). https://doi.org/10.1109/ICoOM.2013.6626491 R.M. Haralick, Statistical and structural approaches to texture. Proc. IEEE 67(5), 786–804 (1979). https://doi.org/10.1109/PROC.1979.11328 D. Weng, R. Chen, Y. Li, D. Zhao, Techniques and applications of electrical equipment image processing based on improved MLP network using BP algorithm. Power Electron. Motion Control Conf., 1102–1105 (2016). https://doi.org/10.1109/IPEMC.2016.7512441 This work is supported in part by the National Natural Science Foundation of China (Grant Nos. 61401238 and 61871241) and by the Nantong University-Nantong Joint Research Center for Intelligent Information Technology (Grant No. KFKT2017A03). The capsule endoscopy images (CEIs) used are provided by the Hangzhou Hitron Technologies Co., Ltd. For any other data and materials, please request them from the authors. School of Electronics and Information, Nantong University, No.9, Seyuan Road, Chongchuan District, Nantong, 226019, Jiangsu Province, China Xiaodong Ji, Tingting Xu, Wenhua Li & Liyuan Liang Nantong Research Institute for Advanced Communication Technologies, No.9, Seyuan Road, Chongchuan District, Nantong, 226019, Jiangsu Province, China Wenluo Corporation of Electronic Science and Technology of Jiangsu, No.9, Seyuan Road, Chongchuan District, Nantong, 226019, Jiangsu Province, China Wenhua Li All the authors took part in the discussion of the work described in this paper. 
The authors XJ and TX wrote the first version of the paper. The author TX performed the experiments. XJ, WL, and LL revised the paper. All the authors worked closely during the preparation and writing of the manuscript, and all read and approved the final manuscript. Correspondence to Xiaodong Ji. Ji, X., Xu, T., Li, W. et al. Study on the classification of capsule endoscopy images. J Image Video Proc. 2019, 55 (2019) doi:10.1186/s13640-019-0461-4 Accepted: 09 April 2019 Keywords: Wavelet transform, Color histogram, Co-occurrence matrix
Preprint gmd-2022-225 https://doi.org/10.5194/gmd-2022-225 Submitted as: model description paper | 27 Sep 2022 Status: a revised version of this preprint is currently under review for the journal GMD. A mixed finite element discretisation of the shallow water equations James Kent1, Thomas Melvin1, and Golo Albert Wimmer2 1Dynamics Research, Met Office, Exeter, UK 2Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA Received: 14 Sep 2022 – Discussion started: 27 Sep 2022 Abstract. This paper introduces a mixed finite-element shallow water model on the sphere. The mixed finite-element approach is used as it has been shown to be both accurate and highly scalable on parallel architectures. Key features of the model are an iterated semi-implicit time stepping scheme, a finite-volume transport scheme, and the cubed sphere grid. The model is tested on a number of standard spherical shallow water test cases. Results show that the model produces similar results to other shallow water models in the literature. How to cite. Kent, J., Melvin, T., and Wimmer, G. A.: A mixed finite element discretisation of the shallow water equations, Geosci. Model Dev. Discuss. [preprint], https://doi.org/10.5194/gmd-2022-225, in review, 2022. Status: final response (author comments only) CEC1: 'Comment on gmd-2022-225', Juan Antonio Añel, 25 Oct 2022 Dear authors, Unfortunately, after checking your manuscript, it has come to our attention that it does not comply with our "Code and Data Policy". https://www.geoscientific-model-development.net/policies/code_and_data_policy.html You have archived the code of your shallow water model on GitHub. However, GitHub is not a suitable repository. GitHub itself instructs authors to use other long-term archival and publishing alternatives, such as Zenodo.
Therefore, please, publish your code in one of the appropriate repositories according to our policy, and reply to this comment with the relevant information (link and DOI) as soon as possible, as it should be available for the Discussions stage. Also, in a potential reviewed version of your manuscript, you must include the modified 'Code and Data Availability' section with the DOI of the new repository. Moreover, for the MetOffice code and data, you must be much more specific. You must provide the exact version number for the code and its DOI. For the data, if it can not be stored outside the MetOffice servers, you should provide a DOI, too, to ensure that it is easy to identify and obtain. Please, be aware that failing to comply promptly with this request could result in rejecting your manuscript for publication. Juan A. Añel Geosci. Model Dev. Exec. Editor Citation: https://doi.org/10.5194/gmd-2022-225-CEC1 AC1: 'Reply on CEC1', James Kent, 21 Dec 2022 We have now uploaded the model code to Zenodo. The shallow water model code, at LFRic revision r39707, can be found on Zenodo with DOI: We have added this to the Code and Data Availability section. Citation: https://doi.org/10.5194/gmd-2022-225-AC1 RC1: 'Comment on gmd-2022-225', Anonymous Referee #1, 26 Oct 2022 The article presents a new numerical method for solving the shallow water equations on the sphere that employs mimetic finite element methods on a cubed-sphere grid. It represents a significant advance from previous work, extending the methods to spherical geometry as a step toward development of an accurate and computationally efficient atmospheric dynamical core for operational use. As a replacement for an operational model that uses a latitude-longitude grid, the article correctly suggests that this method is capable of significant improvements in parallel scalability on the latest computing architectures. 
Results from the standard test cases are presented, and demonstrate that the model performs appropriately. Advanced numerical techniques such as mimetic finite element methods preserve significant properties of the continuous equations in their discrete form and are capable of achieving high efficiency in the massively parallel simulations required by high resolution atmospheric models. The article represents an important step toward development of such a method, and I recommend it for publication with minor revisions, as suggested below. While mimetic methods are becoming more common, it seems a lot to ask of the interdisciplinary GMD audience to follow a discussion of function spaces and de Rham complexes without some assistance. An illustration such as Figure 4 from Melvin et al. (2018) that corresponds to the specific cases mentioned by equations (7) and (8) would be most helpful. Similarly, the article could benefit from a quick reminder of why mixed finite element methods are useful, especially since the lowest order formulation is used here. Differences between finite element methods, finite volume methods, and finite difference methods often disappear when used with low order discretizations. What is gained here that would not be present in a staggered finite volume method such as the one presented by Thuburn et al. (2014)? Given the emphasis placed on computational efficiency in the introduction, I expected more discussion of the method's computational performance. Detailed scaling studies are likely unrealistic at this stage of development, but some general discussion would be helpful. Are the expected gains strictly due to the choice of grid, i.e., cubed sphere vs. latitude-longitude? Or is the numerical method helpful, too, for example, are its stencils for field reconstruction (e.g., Figure 1) smaller than those of other methods, implying less communication is required during runtime?
Equations (5) and (6) suggest that the advecting velocity is $\overline{\vec{u}}^{1/2}$ even in cases where $\alpha \ne 1/2$; is this true? Wood et al. (2014) suggest that off-centering by setting $\alpha > 1/2$ is important in the context of a 3D deep atmosphere model with orography. Is that concern relevant here, given that the method is presented as a step toward a full 3D atmosphere model? How many iterations of GMRES are typically required to solve (33)? How sensitive is this number to the resolution? The description in Section 6 of a "finite-element representation of the sphere within a cell with polynomials" is difficult to follow. I assume that the four vertices of an element lie on the sphere; for the case of quadratic elements, are the nodes that are not vertices also on the sphere? Is the fact that some internal points of a cell may not exactly lie on the spherical surface related to the fact that different function spaces are used for different variables? It doesn't seem to be an issue with other finite-element dynamical cores such as Guba et al. (2014), which also rely on mappings to and from a reference quadrilateral. I found the discussion of error at the beginning of Section 7.1 confusing; it states that the method is second-order in both space and time, but immediately preceding this remark at the end of Section 6, fourth-order convergence is cited as the reason for choosing quadratic elements. I agree that the method should be overall second order, so I presume that the fourth-order convergence refers to reconstructing the spherical surface itself, rather than an arbitrary scalar function? T. Melvin, T. Benacchio, J. Thuburn, and C. Cotter, 2018, Choice of function spaces for thermodynamic variables in mixed finite element methods, Q. J. Roy. Met. Soc. 144:900–916. J. Thuburn, C. J. Cotter, and T.
Dubos, 2014, A mimetic, semi-implicit forward-in-time, finite volume shallow water model: comparison of hexagonal-icosahedral and cubed-sphere grids, Geosci. Model Dev. 7:909–929. N. Wood, A. Staniforth, A. White, et al., 2014, An inherently mass-conserving semi-implicit semi-Lagrangian discretization of the deep-atmosphere global non-hydrostatic equations, Q. J. Roy. Met. Soc. 140:1505–1520. O. Guba, M.A. Taylor, P.A. Ullrich, J.R. Overfelt, and M.N. Levy, The spectral element method (SEM) on variable-resolution grids: evaluating grid sensitivity and resolution-aware numerical viscosity, Geosci. Model Dev. 7:2803–2816. Citation: https://doi.org/10.5194/gmd-2022-225-RC1 AC2: 'Reply on RC1', James Kent, 21 Dec 2022 In response to your general comments: We have produced a figure that shows the mixed finite element spaces used in the model and have included it in section 3.2 of the manuscript. We agree that at lowest order these types of methods often become very similar. However, in Thuburn and Cotter, JCP, 2015, it is shown that the lowest order FE discretization has more benefit when it comes to consistency of the Coriolis term on non-orthogonal meshes than the FV model of Thuburn et al. 2014. Another benefit of FE is the flexibility to go to a higher-order element model. We have included this discussion in the introduction of our manuscript. We've added some discussion to the conclusions. We highlight that the cubed sphere grid has fewer cells than a corresponding lat-lon grid, and that the cubed sphere removes the pole and associated issues with parallel computing. Regarding the stencil size, the MoL transport scheme uses a small stencil for each reconstruction, but it must compute a reconstruction for each stage of the RK scheme. It is not clear at this stage whether this improves communication cost when compared to a scheme with a large stencil that is only called once. In response to your specific comments: 1) This is true, even if alpha $\neq 1/2$.
This is consistent with Wood et al. 2014, and is used to get the second-order time discretization. Currently we use alpha=0.5 in the model configuration. For shallow water we have not seen the need to off-centre. We agree that this is important for a full 3D model. 2) It seems to take around 2-3 iterations for GMRES to converge to a tolerance of 10^-4 on the C24 and C48 grids for both the mountain and Galewsky test. We have stated this at the end of section 5. 3) We have rewritten parts of this section, including adding a sentence describing a linear element to make things clearer. For the quadratic elements all the nodes lie on the sphere. 4) The different function spaces are not why we use the sphere parameterisation. Representing the sphere with elements removes the need for analytic transformations. This means we can use an arbitrary grid (although in this paper we only consider the cubed sphere grid). A downside is we are parameterizing the sphere, but as shown, using quadratic elements on the C96 grid gives a maximum error of 0.0018 m. 5) You are correct that the fourth-order is for the spherical surface, and the second-order is for the Williamson 2 test case. We have edited the text here to make the distinction clearer. RC2: 'Comment on gmd-2022-225', Hilary Weller, 30 Oct 2022 This paper clearly presents the shallow water model which uses some of the numerical methods that will be used in the next Met Office dynamical core. It is therefore an important model description paper.
It brings together mixed finite-element modelling of the second-order wave equations, finite-volume modelling of transport and semi-implicit time stepping. The paper is concise and easy to follow, drawing on other published work where needed in order to define the model, although some clarifications are still needed. The results are clearly presented and, at this stage, nearly comprehensive. The motivation for this new model could be a lot stronger. Much of the motivation provided could have been written last century, for example the need for parallelisation and the need to go beyond finite differences, finite volume and semi-Lagrangian. The motivation for mixed finite elements is easy and has already been written about. The motivation needs to involve massive parallelisation, wave dispersion, spectral elements and DG. Section 4 needs to define the order of accuracy in space of the transport scheme. I think it must be limited to two because you do not define how you fit a polynomial using cell average values. Figure 4 and the related discussion (lines 239-243) are weak. Figure 4 only really shows that your model works. It doesn't, as you say, show that the "results are comparable to other shallow water models" or demonstrate "the model's ability to correctly simulate flow over orography". I would plot errors rather than figure 4 (in comparison to STSWM) and convergence with resolution. It is also informative to show the vorticity after 50 days which is a good indicator of conservation, balance and a lack of spurious artefacts in the solution. E.g. see: Fig 11 of "A unified approach to energy conservation and potential vorticity dynamics for arbitrarily-structured C-grids", Journal of Computational Physics 229 (2010) 3065–3090 or fig 5 of "Computational Modes and Grid Imprinting on Five Quasi-Uniform Spherical C-Grids", Weller, Thuburn and Cotter. Technical Corrections Try to make your writing more concise.
For example, delete phrases like "and the interested reader is referred there for more information". Please also see Shaw, J., Weller, H., Methven, J. and Davies, T. (2017) Multidimensional method-of-lines transport for atmospheric flows over steep terrain using arbitrary meshes. Journal of Computational Physics, 344. pp. 86-107. ISSN 0021-9991 for a description of the creation of stencils and polynomials for this type of transport scheme. In table 1, use scientific notation rather than exponents. Regarding motivation, we have rewritten parts of the introduction to take these points into account. We have stressed the need for massively parallel models for the future of weather and climate forecasting, and have discussed the benefits of mixed finite-element methods over finite-volume. The transport scheme in section 4 is actually 3rd order in space and time. The temporal order comes from the SSPRK3 algorithm. The spatial order comes from the quadratic reconstruction of the field at flux points. The fitting of the polynomial is such that the integral of the polynomial is equal to the integral of the variable within each cell. We have made this clearer in the text in section 4. We have significantly rewritten large parts of the mountain test case section. We use a high-resolution semi-implicit semi-Lagrangian scheme as a reference to produce error plots, which we then compare with other models in the literature. We also look at the error convergence with resolution. We have extended the energy and potential enstrophy statistics to 50 days, and provided a plot of the day 50 potential vorticity. We have removed the text "and the interested reader is referred there for more information" and have used scientific notation in the error norm table. This paper introduces the Met Office's new shallow water model. The shallow water model is a building block towards the Met Office's new atmospheric dynamical core.
The shallow water model is tested on a number of standard spherical shallow water test cases, including flow over mountains and unstable jets. Results show that the model produces similar results to other shallow water models in the literature.
Cogitationes ex mentis et machina Play Tic-tac-toe with Arthur Cayley! Submitted by Marc on Fri, 02/05/2016 - 22:51 Tic-tac-toe (or noughts and crosses, or Xs and Os) is a turn-based game for two players who alternately tag the spaces of a $3 \times 3$ grid with their respective marker: an X or an O. The object of the game is to place three markers in a row, either horizontally, vertically, or diagonally. Given only the mechanics of Tic-tac-toe, the game can be expressed as a combinatorial group by defining a set $A$ of generators $\{a_i\}$ which describe the actions that can be taken by either player. The Cayley Graph of this group can then be constructed, expressing all the possible ways the game can be played. Using the Cayley Graph as a model, it should be possible to learn the Tic-tac-toe game tree using dynamic programming techniques (hint: the game tree is a sub-graph of the Cayley Graph). Before going any further, it is important to understand the structure of the Tic-tac-toe group. Tic-tac-toe is expressed as a finite combinatorial group on the set, $S$, of $4^9$ possible board positions: the 9 grid locations which can be empty or contain an X, an O, or the superposition of X and O, $\ast$. The generator set, $A$, is a proper subset of $S$ with a cardinality of 10; the tagging of each of the 9 grid locations with a marker, and the empty grid (not playing is also a valid play). The identity element of the group is the empty grid, $\varnothing$, which is also the initial configuration in the game. The group law is the group operation which combines an initial state with an action to produce the final state, and is expressed as follows: $$ p: S \times S \to S $$ $$ (s \cdot s')_{ij} = s_{ij} \cdot s'_{ij} \quad \text{for all } s, s' \in S $$ In other words, the application of the group law will evaluate the dot-product of each grid cell location.
The dot-product of grid cells is defined as follows: $$ s_{ij} \cdot s'_{ij} = \left\{ \begin{array}{lr} s_{ij} & \quad s_{ij} \neq \varnothing \land s'_{ij} = \varnothing \\ s'_{ij} & \quad s_{ij} = \varnothing \land s'_{ij} \neq \varnothing \\ \ast & \quad s_{ij} = \overline{s'_{ij}} \\ \varnothing & \quad s_{ij} = s'_{ij} \\ \overline{s_{ij}} & \quad s'_{ij} = \ast \land s_{ij} \neq \varnothing \end{array} \right. $$ The product of a marker with an empty cell tags the cell with the marker; two different markers will tag the cell with the superposition of both ($\ast$). The product of two similar markers will tag the cell as empty, therefore the group law described here is an autoinverse; this means that applying the law to a position with itself will result in the identity element. The group $E$ is expressed as $\langle A|p \rangle$, and its full state space is specified by repeated applications of the generators. The fact that $E$ is a group can be asserted by verifying that it satisfies the group axioms: Totality: The set is closed under the operation $p$; combining any two positions in $S$ yields another position in $S$. Associativity: For any positions $s, s', s'' \in S$, $(s \cdot s') \cdot s'' = s \cdot (s' \cdot s'')$. Identity: There exists an identity element (the empty grid $\varnothing$). Divisibility: For each element in the group, there exists an inverse which yields the identity element when the group law is applied thereto. The proof that the group satisfies these axioms should be pretty evident. A formal proof of this fact is left as a future exercise. The state space can be further constrained by defining a more intelligent group law. The state set $S$ could be partitioned into two sub-sets: $S = X \cup O$; where $X$ is the set of positions which allow X to play, and $O$ is the set of positions which allow O to play (note that the intersection of $X$ and $O$ is not empty). This would simplify the Cayley Graph and thus reduce the time required to learn the game tree.
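The cell-wise group law above is easy to make concrete. The sketch below is my own illustration (names such as `cell_product` are not from the article): a board is a flat 9-tuple, and the five dot-product cases are applied cell by cell.

```python
EMPTY, X, O, BOTH = None, "X", "O", "*"

def conjugate(m):
    """Complement of a single marker: X <-> O."""
    return O if m == X else X

def cell_product(a, b):
    """Dot-product of two grid cells, following the five cases in the text."""
    if a == b:
        return EMPTY              # auto-inverse: s . s yields the identity
    if a is EMPTY:
        return b                  # an empty cell picks up the other marker
    if b is EMPTY:
        return a
    if b == BOTH:
        return conjugate(a)       # s . * = complement of s
    if a == BOTH:
        return conjugate(b)       # same rule with the roles swapped
    return BOTH                   # X . O = superposition

def product(s, t):
    """The group law p: apply the dot-product to each of the 9 cells."""
    return tuple(cell_product(a, b) for a, b in zip(s, t))
```

For example, combining any board with itself yields the empty grid, confirming the autoinverse property, and combining an X board with the matching O board yields the superposition $\ast$.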
However, this would greatly increase the complexity of the group law, making it more prone to error. The abstract structure of the Tic-tac-toe group can be encoded with a Cayley graph, $\Gamma$, where each of the vertices represents a position, and the edges represent the possible transitions resulting from an agent making a move. The Cayley graph of the Tic-tac-toe group is isomorphic to the backup diagram of the approximate value function, $V^\pi(s)$. By extending the graph -- associating values with each of the vertices (states), and weights with the edges -- it can be used as an initial approximation of the value function. Dynamic programming algorithms will iteratively update the values and weights to obtain a better approximation of the optimal value function. By removing the edges that tend toward a zero probability of being followed, the resulting graph should be isomorphic to the game tree. Initially, the value of each state will be set to zero with the exception of winning states which will have high values, and losing states which have low values. Given the sets $W$ and $L$ which contain all the winning and losing positions respectively (note: $W \cap L = \emptyset$), the initial values could be assigned as follows: $$ \forall s \in S \quad : \quad V^\pi(s) = \left\{ \begin{array}{lr} \gg 0 & \quad s \in W \\ \ll 0 & \quad s \in L \\ 0 & \quad s \notin W \cup L \end{array} \right. $$ The Tic-tac-toe group allows for positions that are not valid in a regular game (i.e. the states with superpositions). These moves should be suppressed in the process of iteratively improving the approximation of the value function. To do this, the transitions leading to invalid positions could be assigned a very small weight, ensuring that the probability of following the edge tends toward zero.
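The sets $W$ and $L$ of winning and losing positions referenced above can be enumerated with a simple line check. The helper below is a hypothetical sketch (not from the article): it tests whether a flat 9-cell board holds three of the given marker in a row.

```python
# the eight winning lines of the 3x3 grid, as index triples
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def is_win_for(board, marker):
    """True if `marker` occupies all three cells of some line."""
    return any(all(board[i] == marker for i in line) for line in LINES)
```

From X's perspective, $W$ would collect the boards where `is_win_for(board, "X")` holds and $L$ those where O has a completed line.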
The same could be done to prevent actions which place a marker in a previously occupied grid cell: $$ P(s \cdot a = s') = \left\{ \begin{array}{lr} 0 & \quad \exists i,j \in \mathbb{Z}/3 : \quad s_{ij} \neq \varnothing \land a_{ij} \neq \varnothing \\ >0 & \quad \forall i,j \in \mathbb{Z}/3 : \quad s_{ij} = \varnothing \lor a_{ij} = \varnothing \end{array} \right. $$ This will ensure that an agent using the Cayley graph as a value function approximation will generally not take actions leading to invalid states (which would be seen as a newbie error or an attempt at cheating by an opponent). The simplicity of the Tic-tac-toe problem makes it a good pedagogical tool for learning about reinforcement learning. It is straightforward to write a computer program to play Tic-tac-toe perfectly, to enumerate the 765 essentially different positions (the state space complexity), or the 26,830 possible games up to rotations and reflections (the game tree complexity) on this space. [1] However, by designing a program which learns how to play rather than manually building the game tree, the relatively small state space makes it easier to validate the techniques and algorithms used. Additionally, the theoretical foundations should also be applicable to more complex problems with state spaces that are too large to hand-build the associated game tree. In this article, the Tic-tac-toe problem was expressed in group-theoretic terms. There is an entire body of work on group theory which may provide valuable tools for reasoning about dynamic programming algorithms used to learn approximations of the solutions to modelled problems. In future articles, the ideas developed herein will be tested by implementing them using the Didactronic toolkit. The goals of this endeavour are two-fold: 1) to validate the hypothesis that group theory provides a useful formalism for expressing reinforcement learning systems, and 2) to drive the development of the Didactronic Toolkit to make it more useful as a generalized machine learning framework.
Tags: Cayley Graph, Combinatorial Group
Performance of the marginal structural Cox model for estimating individual and joined effects of treatments given in combination Clovis Lusivika-Nzinga1, Hana Selinger-Leneman1, Sophie Grabar1,2, Dominique Costagliola1 & Fabrice Carrat (ORCID: orcid.org/0000-0002-8672-7918)1,3 BMC Medical Research Methodology volume 17, Article number: 160 (2017) Cite this article The Marginal Structural Cox Model (Cox-MSM), an alternative approach to handle time-dependent confounders, was introduced for survival analysis and applied to estimate the joint causal effect of two time-dependent nonrandomized treatments on survival among HIV-positive subjects. Nevertheless, Cox-MSM performance in the case of multiple treatments has not been fully explored under different degrees of time-dependent confounding for treatments or in the case of interaction between treatments. We aimed to evaluate and compare the performance of the marginal structural Cox model (Cox-MSM) to the standard Cox model in estimating the treatment effect in the case of multiple treatments under different scenarios of time-dependent confounding and when an interaction between treatment effects is present. We specified a Cox-MSM with two treatments including an interaction term for situations where an adverse event might be caused by two treatments taken simultaneously but not by each treatment taken alone. We simulated longitudinal data with two treatments and a time-dependent confounder affected by one or both treatments. To fit the Cox-MSM, we used the inverse probability weighting method. We illustrated the method to evaluate the specific effect of protease inhibitors combined (or not) with other antiretroviral medications on the anal cancer risk in HIV-infected individuals, with CD4 cell count as time-dependent confounder. Overall, Cox-MSM performed better than the standard Cox model. Furthermore, we showed that estimates were unbiased when an interaction term was included in the model.
Cox-MSM may be used for accurately estimating causal individual and joined treatment effects from a combination therapy in the presence of time-dependent confounding, provided that an interaction term is estimated. Combining multiple treatments is a common practice in the therapeutic strategy of chronic or infectious diseases in order to strengthen the effect of treatments or to limit the resistance of pathogens to therapies. When an adverse event occurs in a patient taking multiple treatments, the mainstay of therapy would be to discontinue the suspected inducing drug while maintaining the others. In this case, precise identification of the causative treatment is essential. This topic is particularly relevant when performing a safety analysis of treatments in cohort studies. In such studies, the presence of time-dependent confounders (i.e. covariates that predict disease progression and treatment initiation) affected by past treatment might lead to biased estimates if conventional regression methods are used [1, 2]. Furthermore, estimating the individual effect of each treatment becomes methodologically challenging when treatments are given simultaneously and change over time. Marginal structural models (MSMs), a class of causal models, have been proposed as a solution to estimate the causal effect of a time-dependent treatment in the presence of time-dependent confounders [3, 4]. In this approach, the inverse probability of treatment weighted (IPTW) estimation method is used to consistently estimate MSM parameters [5]. In the context of multiple treatments, a seminal work introduced Cox-MSM for survival analysis and applied it to estimate the joint causal effect (efficacy) of two time-dependent nonrandomized treatments on survival among HIV-positive subjects [6]. IPTW estimation was used to compute stabilized weights related to multiple medication intakes and to balance the treatment groups at each month.
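The stabilized IPTW weights mentioned above reduce, for each subject, to a cumulative product over visits of treatment-probability ratios. A minimal sketch (the array names and shapes are illustrative assumptions, not the authors' code) is:

```python
import numpy as np

def stabilized_weights(p_num, p_denom):
    """Cumulative stabilized inverse-probability-of-treatment weights.

    p_num[i, m]   : P(observed treatment at visit m | treatment history, baseline V)
    p_denom[i, m] : P(observed treatment at visit m | treatment history,
                      confounder history, baseline V)
    Returns an (n_subjects, n_visits) array of weights sw_i(m).
    """
    return np.cumprod(p_num / p_denom, axis=1)
```

When the time-dependent confounder carries no extra information (numerator equal to denominator), every weight is 1 and the weighted analysis reduces to the unweighted one.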
The statistically significant beneficial effects observed were consistent with the results of previous randomized clinical trials. More recently, IPTW estimation was used for joint treatment effects of two treatments or the marginal effect of one treatment in a setting where two concurrent treatments are given at a point in time [7]. MSMs have been used to study the direct effect of several exposures on the outcome of interest by controlling for the interrelation over time between studied exposures and between exposures and time-dependent covariates. In this way, Tager et al. [8] simultaneously studied the effects of physical activity and body composition on functional limitation in the elderly by controlling for the confounding induced by the interrelation of these two variables over time and their relation to other covariates. To more clearly illustrate the differences in methods and their influence on bias, they carried out simulations comparing weighted and unweighted analysis with respect to the true parameters. However, they did not consider interaction in their simulation study. Howe et al. used a joint Cox-MSM to estimate the joint effects of multiple time-varying exposures (alcohol consumption and injected drug use) on HIV acquisition [9]. Lopez-Gatell et al. used a joint Cox-MSM to estimate the effect of incident tuberculosis disease and Highly Active Antiretroviral Therapy (HAART) initiation on AIDS-related mortality [10]. Cole et al. estimated the joint effects of HAART and PCP prophylaxis on time to AIDS or death using marginal structural models [11]. Bodnar et al. estimated the causal effect of 16 different combinations (regimes) of iron treatment throughout pregnancy on the odds of anemia at delivery [12]. Nevertheless, to date, Cox-MSM performance in the case of multiple treatments has not been fully explored under different degrees of time-dependent confounding for treatments or in the case of interaction between treatments.
While other studies [8, 9, 11] have included interaction between treatments in their analyses, none has specifically focused on the bias generated when interaction is excluded from the estimated model. This latter issue is critical as numerous adverse events are caused by specific drug-drug interactions and would not occur if each drug was taken separately (e.g., interactions with cytochrome P450 3A4 inhibitors and statins). The goal of this paper is to evaluate the Cox-MSM performance for estimating the individual and joined effects of multiple treatments when they are given in combination through simulation studies. For the sake of simplicity we will limit our study to exploring the use of two treatments with a potential interaction between treatments. We will compare results from Cox-MSM with estimates obtained using a classic time-dependent Cox regression model and provide an application in the context of HIV infection to evaluate the specific effect of protease inhibitors combined (or not) with other antiretroviral medications on the risk of anal cancer in HIV-infected individuals, using CD4 cell count as time-dependent confounder. The paper is structured as follows: Section 2 describes the method used in this work; Section 3 provides the results of simulation studies that estimate the individual effects of two treatments on an adverse event; Section 4 presents an application of the method. We discuss our results in Section 5 and finally, we conclude in Section 6. Notation for the Cox-MSM We considered a longitudinal study in which n subjects (labeled i = 1, …, n) entered a study at baseline, were given multiple treatments, and were followed at regular time intervals from enrollment into the cohort up to M visits or until the event of interest. Visits (labeled m = 0, 1, 2,…, M) were assumed to take place at the beginning of intervals in the form [m, m + 1]. At each interval, the value of the time-dependent confounder, the treatments and the event were observed.
We used capital letters to represent random variables and lower-case letters to represent possible realizations (values) of random variables. We explored the model with one disease progression marker and considered the case where a subject might be given two treatments. A1i (m) and A2i (m) denote dichotomous variables indicating whether patient i received treatment A1 and/or A2 at visit m. Accordingly, there are four possible categories for treatment exposure: exposed to both treatments (A1i (m), A2i (m)) = (1, 1), exposed to only one treatment (A1i (m), A2i (m)) = (1, 0) or (A1i (m), A2i (m)) = (0, 1), not exposed (A1i (m), A2i (m)) = (0, 0). We denoted the baseline fixed covariates V = L (0), the time-dependent confounder by Li (m), the event (death or side effect) by Yi (m) and the associated failure time variable that may either be exactly observed or interval censored by Ti. We used an overbar to represent a covariate history up to a visit, i.e. \( {\overline{A}}_i \) (m) = (Ai (0), Ai (1), … Ai (m)) and \( {\overline{L}}_i \) (m) = (Li (0), Li (1), … Li (m)) to indicate treatment and confounder history up to visit m. The Cox-MSM with two treatments We specified the Cox-MSM when two treatments are given to a patient: $$ {\lambda}_{T_{\overline{a}}}\left(m|V\right)={\lambda}_0(m)\ \exp \left({\beta}_1{a}_1(m)+{\beta}_2{a}_2(m)+{\beta}_3{a}_1(m){a}_2(m)+{\beta}_4V\right) $$ where T is the random variable representing a subject's survival time given the treatment history, \( {\lambda}_{T_{\overline{a}}}\left(m|V\right) \) is the hazard of T at visit m among subjects given pretreatment covariates V, λ 0(m) is the unspecified baseline hazard at visit m, exp(β1), exp(β2) and exp(β3) are the causal rate ratios for each treatment and their interaction, and exp(β4) is the rate ratio associated with the vector of baseline covariates. We performed simulations using a cohort of HIV-positive individuals receiving multiple antiretroviral treatments.
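To make the role of the interaction term concrete, the rate ratio implied by the model's linear predictor for a given exposure pattern can be computed directly. The helper below is an illustrative sketch (not from the paper), using a scalar baseline covariate v:

```python
import math

def rate_ratio(a1, a2, v, beta1, beta2, beta3, beta4):
    """Rate ratio vs. baseline implied by the Cox-MSM linear predictor:
    exp(beta1*a1 + beta2*a2 + beta3*a1*a2 + beta4*v)."""
    return math.exp(beta1 * a1 + beta2 * a2 + beta3 * a1 * a2 + beta4 * v)
```

With beta3 = 0 the joint rate ratio is exactly the product of the two individual ones; a nonzero beta3 multiplies that product by exp(beta3), which is precisely the effect an analysis omitting the interaction term cannot capture.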
We set CD4 count as the time-dependent confounding covariate Li and occurrence of an adverse event as the outcome Yi. We simulated a data structure in which the outcome at visit m depended on current treatment status only. We assumed that the only baseline covariate was the pre-treatment value of the confounder, Li (0). This section presents data generation and structure. We assumed that: A1i (m), A2i (m) and Li (m) remained constant during the subsequent interval between visits m and (m + 1); treatment continued once initiated; and there was no loss to follow-up - thus censoring occurred only at the end of the follow-up. Figure 1 shows the causal directed acyclic graph corresponding to the structure of simulated data. We considered three different cases of time-dependent confounding. In cases 1 and 2, the two treatments were predicted by the time-dependent confounder and affected its future value, but the coefficient of the time-dependent confounder in the treatment-prediction function was set to different values for treatment A2: case 1 (strong confounding) and case 2 (weak confounding). In case 3, treatment A2 was not predicted by the time-dependent confounder, which was not affected by that treatment. Figure 1 caption: Causal directed acyclic graphs corresponding to the structure of simulated data. A1 and A2 are the treatments, L is the time-dependent confounder and Y is the outcome. Cases 1 and 2 considered all relationships among A1, A2 and L. The time-dependent confounder was strongly associated with treatments A1 and A2 in case 1, whereas it was weakly associated with treatment A2 in case 2. Coefficients of the time-dependent confounder in the treatment-prediction functions were set to 0.004 and 0.001, respectively. Case 3: relationships between A2 and L were not considered.
Data were simulated from a marginal structural model, with the confounding in the exposures-outcome relationship arising via T0 as follows: Y (m + 1) ← T0 → L (m) → A1 (m), Y (m + 1) ← T0 → L (m) → A2 (m). Several studies have simulated data from a Cox-MSM under different conditions [13,14,15,16,17,18,19,20]. In our study, we simulated data for two treatments, adapting the data generation processes for one treatment described in Young et al. and Vourli and Touloumi [15, 21]. For each simulated subject we generated: (a) counterfactual survival times Ti0 from an exponential distribution with parameter λ; (b) covariate values at baseline (time 0) as follows: Li (0) = b + c log Ti0 + ei,0, where b ∼ N(μb, σb2); ei,m ∼ N(0, σe2); c is the coefficient that gives the strength of association between the time-dependent confounder and the counterfactual survival time; (c) treatments A1i (m) and A2i (m) from a distribution conditional on a function of past variables. For each treatment this function included Li (m), A1i (m-1), A2i (m-1) and the product of A1i (m-1) and A2i (m-1). For each m, we generated subsequent values of the covariate Li (m) as a linear function of past variables; (d) finally, we generated the actual survival time Ti of each individual. We considered five different sets of parameters (A, B, C, D, E) for the marginal true effects of treatments on the outcome, resulting in a total of 15 sub-cases (numbered 1A to 3E). Furthermore, for cases 1 and 2, the coefficients of the time-dependent confounder in the treatment-prediction function (see Additional file 1) were set to 0.004 (α1 = 0.004, ω1 = 0.004) and 0.001 (α1 = 0.004, ω1 = 0.001), implying strong and weak confounding, respectively (Table 1). The coefficient c that gives the strength of association between the time-dependent confounder and the counterfactual survival time was set to 6, and μb and σb were equal to 600 and 200, respectively. σe was equal to 3.
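Steps (a)-(c) above can be sketched as follows. This is a simplified illustration, not the paper's exact generator: the intercepts, the treatment and confounder-update effect sizes, and the final derivation of the observed survival time are placeholders (the full process is in Additional file 1).

```python
import numpy as np

rng = np.random.default_rng(0)

n, M = 1000, 12                  # subjects, monthly visits
lam = 0.001                      # rate of the exponential counterfactual times
mu_b, sigma_b, sigma_e, c = 600.0, 200.0, 3.0, 6.0

# (a) counterfactual survival times T0 ~ Exponential(lambda)
T0 = rng.exponential(scale=1.0 / lam, size=n)

# (b) baseline confounder: L(0) = b + c*log(T0) + e
b = rng.normal(mu_b, sigma_b, size=n)
L = b + c * np.log(T0) + rng.normal(0.0, sigma_e, size=n)

A1 = np.zeros(n, dtype=int)
A2 = np.zeros(n, dtype=int)
alpha1 = omega1 = 0.004          # confounder coefficients (case 1: strong)
for m in range(M):
    # (c) treatments drawn conditional on L(m), past treatments and their
    # product; once initiated, treatment continues (np.maximum keeps it on).
    lin1 = -4.0 + alpha1 * L + 0.5 * A2 + 0.3 * A1 * A2
    lin2 = -4.0 + omega1 * L + 0.5 * A1 + 0.3 * A1 * A2
    A1 = np.maximum(A1, rng.binomial(1, 1.0 / (1.0 + np.exp(-lin1))))
    A2 = np.maximum(A2, rng.binomial(1, 1.0 / (1.0 + np.exp(-lin2))))
    # confounder update: a linear function of past variables
    # (treatment raises CD4, mimicking case 1)
    L = 0.95 * L + 10.0 * A1 + 10.0 * A2 + rng.normal(0.0, sigma_e, size=n)
# (d) the actual survival time Ti would then be derived from T0 and the
# treatment history (omitted here; see Additional file 1).
```

The essential feature reproduced here is the feedback loop: L predicts the treatments, and the treatments feed back into future L, which is what creates time-dependent confounding.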
Table 1 Parameters of marginal true effect for the 15 simulated sub-cases The event rate varied between 0.1% and 2%. To avoid separation of data, simulations with no event in at least one category of treatment exposure were discarded. For each set of parameters, we generated 1000 datasets of 1000 patients each. We assumed that each patient in this cohort was followed for 12 months and had monthly clinical follow-up visits. Estimation of the Cox-MSM with two treatments To fit the Cox-MSM, and in order to keep the weight variability as low as possible, stabilized weights (SW) were used for estimation via IPTW in all analyses [4]. Weights related to each treatment and to censoring were computed and multiplied to obtain a final set of weights as follows:

$$ SW_i^{A_1}(m)=\prod_{m=1}^{M}\frac{P\left[A_{1i}(m)|{\overline{A}}_{1i}(m-1),{\overline{A}}_{2i}(m-1),L_i(0)\right]}{P\left[A_{1i}(m)|{\overline{A}}_{1i}(m-1),{\overline{A}}_{2i}(m-1),{\overline{L}}_i(m)\right]} $$ (2)

$$ SW_i^{A_2}(m)=\prod_{m=1}^{M}\frac{P\left[A_{2i}(m)|{\overline{A}}_{1i}(m-1),{\overline{A}}_{2i}(m-1),L_i(0)\right]}{P\left[A_{2i}(m)|{\overline{A}}_{1i}(m-1),{\overline{A}}_{2i}(m-1),{\overline{L}}_i(m)\right]} $$ (3)

$$ SW_i^{C}(m)=\prod_{m=1}^{M}\frac{P\left[C(m)|{\overline{C}}_i(m-1),{\overline{A}}_{1i}(m-1),{\overline{A}}_{2i}(m-1),L_i(0)\right]}{P\left[C(m)|{\overline{C}}_i(m-1),{\overline{A}}_{1i}(m-1),{\overline{A}}_{2i}(m-1),{\overline{L}}_i(m)\right]} $$ (4)

$$ SW_i(m)=SW_i^{A_1}(m)\times SW_i^{A_2}(m)\times SW_i^{C}(m) $$

The numerator of each treatment SW is the probability that a subject received the observed treatment at visit m conditional only on A1i and A2i history and the baseline covariate. The denominator is the probability that a subject received the observed treatment at visit m given A1i and A2i history and the time-dependent covariate history.
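Given fitted numerator and denominator probabilities for one treatment, the stabilized weight is a running product of visit-specific ratios. A sketch of that computation (the array layout and the helper name are ours, not from the paper; the probabilities would come from fitted logistic models):

```python
import numpy as np

def stabilized_weights(a, p_num, p_den):
    """Cumulative stabilized weights for one treatment.

    a     : (n, M) array of observed 0/1 treatment indicators
    p_num : (n, M) fitted P[A(m)=1 | treatment history, baseline covariates]
    p_den : (n, M) fitted P[A(m)=1 | treatment history, confounder history]
    Returns an (n, M) array: at visit m, the product of the ratios up to m.
    """
    # probability of the treatment value actually received at each visit
    f_num = np.where(a == 1, p_num, 1.0 - p_num)
    f_den = np.where(a == 1, p_den, 1.0 - p_den)
    return np.cumprod(f_num / f_den, axis=1)

# The final weight multiplies the per-treatment and censoring weights:
# sw = stabilized_weights(a1, ...) * stabilized_weights(a2, ...) * sw_censoring
```

When the time-dependent confounder carries no extra information (numerator equals denominator), every weight is 1 and the weighted analysis reduces to the unweighted one.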
Once the weights were computed, we fitted a weighted Cox proportional hazards model to estimate the parameters [9]. We used robust variance estimators to estimate standard errors [22]. We implemented this analysis using the covs option of the time-dependent PHREG procedure in SAS [9]. For the situations where the interaction was not set to zero, we also examined the model without the interaction term. To assess the performance of the model, we computed the absolute bias, defined as the difference between the average simulated estimate and its corresponding true value, and the coverage rate, defined as the percentage of confidence intervals that included the true value. Figures 2, 3 and 4 show the bias and the 95% coverage rate of unweighted and weighted treatment effect estimates as the number of events increases, for the different cases. Values of mean bias (MB), standard deviations of estimates, root mean squared error (RMSE) and mean coverage rate (MCR) for all cases are presented in the supplementary material. Bias and coverage rate of treatment effect estimates for the sub-cases 1A, 1B, 1C, 1D and 1E As shown in Figs. 2, 3 and 4 (see Additional file 2: Table S1 for mean values), the weighted analysis yielded the most accurate estimates of the treatment effects. Indeed, the weighted analysis yielded unbiased estimates for the effects of A1 and A2 and the interaction between treatments in all cases. In contrast, estimates from the unweighted analysis were clearly biased in case 1 (for treatment effects A1 and A2) and in cases 2 and 3 (for the interaction between treatments). Estimates from the unweighted analysis were less biased for the interaction between treatments (in case 1) and for treatment effects A1 and A2 (in cases 2 and 3). The standard deviations differed from the RMSE in all cases of the unweighted analysis, while the weighted analysis produced standard deviations identical to the RMSE.
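The performance measures used above can be computed directly from the per-dataset estimates and standard errors. A sketch (the function name and interface are ours):

```python
import numpy as np

def performance(estimates, std_errors, true_value, z=1.96):
    """Summarise simulation results for one parameter across datasets."""
    est = np.asarray(estimates, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    bias = est.mean() - true_value                     # mean bias
    sd = est.std(ddof=1)                               # empirical SD of estimates
    rmse = np.sqrt(np.mean((est - true_value) ** 2))   # root mean squared error
    covered = (est - z * se <= true_value) & (true_value <= est + z * se)
    return bias, sd, rmse, covered.mean()              # coverage of 95% CIs
```

Note that for an unbiased estimator the RMSE coincides with the empirical SD (up to the degrees-of-freedom correction), which is the pattern reported above for the weighted analysis.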
Furthermore, for estimates from the unweighted analysis, we observed a slight decrease in bias as the number of events increased, in case 1 (for treatment effects A1 and A2 and the interaction between treatments) and in cases 2 and 3 (only for the interaction between treatments). For sub-cases 1B and 1E, weighted estimates obtained when the interaction was not included in the model were biased compared with those obtained when the interaction was included (Fig. 5). Bias of treatment effect estimates for the sub-cases 1B and 1D according to whether interaction was estimated in the model Application to exploring the risk of anal cancer associated with exposure to protease inhibitors in HIV-1-infected persons from the FHDH-ANRS CO4 cohort Recently, Bruyand et al. [23] found a possible association of cumulative PI (protease inhibitor) exposure with a higher risk of anal cancer in HIV-1-infected persons. However, these primary analyses did not adjust for CD4 count at treatment initiation or duration of CD4 count <200 cells/μl, both known to be associated with the likelihood of receiving PIs and with the risk of anal cancer [24]. We applied the Cox-MSM framework to evaluate the individual and joined effects of PIs given in combination with other antiretroviral treatments (ARVs) on the risk of anal cancer in HIV-1-infected persons. Data were obtained from the FHDH cohort (French Hospital Database on HIV-ANRS CO4), a nationwide hospital cohort initiated in 1989 for individuals infected with HIV [25]. We selected all HIV-1-infected individuals who were treatment naïve at enrollment, until 2008. Demographic, clinical, laboratory and ARV information, and cancer events, were collected at enrollment and at follow-up visits as reported elsewhere [26]. For illustration purposes, all ARVs other than PIs were grouped in a single category irrespective of drug class. Baseline covariates were age, gender, transmission group, origin (sub-Saharan vs other), AIDS diagnosis at baseline, CD4 cell count and HIV RNA.
Time-dependent covariates were AIDS diagnosis, CD4 cell count and HIV RNA. The time-dependent confounder was the CD4 cell count. The follow-up was split into one-month periods. Treatment and all time-dependent covariates were assumed to remain constant within each period. Time zero was the enrollment date in FHDH. Patients were followed until the occurrence of anal cancer, death or the end of follow-up, whichever occurred first. A total of 72,355 patients (531,823 person-years) were followed. The median age of the study population was 34 years at enrollment in FHDH. Study subjects were 67% male and 79% of sub-Saharan origin. Median CD4 cell count and HIV RNA at baseline were 360 cells/μL and 10,095 copies/mL, respectively. The cohort experienced 9972 person-years (PY) of PIs only, 237,323 PY of other ARVs and 130,428 PY of PIs and other ARVs given simultaneously. During follow-up, a total of 130 patients (24/100,000 PY) developed anal cancer. The rate of anal cancer was 90/100,000 PY for patients who received PIs only, 27/100,000 PY for those who received other ARVs, 33/100,000 PY for those who received PIs and other ARVs, and 9.6/100,000 PY for untreated patients. To determine whether current CD4 count predicted treatment with PIs and other ARVs, we fitted pooled logistic models for treatment initiation with PIs and other ARVs that included the baseline covariates and the time-dependent covariates. CD4 cell count predicted treatment with PIs (odds ratio (OR) = 2.77 (p < .0001) for low (<200 cells/μL) versus high (>500 cells/μL) CD4 cell count, and OR = 1.39 (p < .0001) for moderate (200–500 cells/μL) versus high CD4 cell count). CD4 cell count also predicted treatment with other ARVs (OR = 5.63 (p < .0001) for low versus high, and 2.00 (p < .0001) for moderate versus high CD4 cell count).
To determine whether the treatments had an impact on the CD4 count, we fitted a linear model for the mean CD4 count (in cells/μl) in the current month given the baseline covariates, PIs and other ARVs in the previous month, and the remaining time-dependent covariates in the previous month [6]. As expected, we found that PIs and other ARVs had an impact on the CD4 count, with coefficients estimated by the linear model of 1.44 (p < .0001) and 0.89 (p < .0001), respectively. This exploratory analysis confirmed that CD4 count was a potential time-dependent confounder affected by past treatment exposure, as described in case 1. Stabilized weights, related to each treatment class (PIs, other ARVs) and to censoring, were then constructed using Eqs. (2), (3) and (4). They were estimated using logistic regression models with baseline covariates for the SW numerator and baseline and time-dependent covariates for the SW denominator. To reduce the impact of extremely high weights, we truncated the weights at the 1st and 99th percentiles of their distribution across all person-months of follow-up [27]. The SW had a mean of 1.10 and a standard error of 0.37 after truncation at the 1st and 99th percentiles (Fig. 6). Distribution of stabilized weights related to PIs and other ARVs For the Cox-MSM, in addition to the treatment variables including their interaction, we adjusted for baseline covariates - this weighted model should be considered the reference model. We also fitted a weighted model without the interaction term. For the standard time-dependent Cox model, we adjusted for all baseline and time-dependent covariates and estimated the interaction. The product term between PIs and other ARVs would represent interaction in these models only in the absence of bias due to confounding or selection. Conversely, the product term would represent effect measure modification if bias due to confounding was present for only one of the two treatments [9, 28].
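The truncation step described above can be expressed in a few lines; a sketch, with the function name ours:

```python
import numpy as np

def truncate_weights(sw, lower=1.0, upper=99.0):
    """Truncate stabilized weights at percentiles of their distribution
    across all person-months, limiting the influence of extreme weights."""
    lo, hi = np.percentile(sw, [lower, upper])
    return np.clip(sw, lo, hi)
```

Truncation trades a small amount of bias for a large reduction in variance; the post-truncation mean near 1 reported above is the usual diagnostic that the weight models are reasonably specified.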
Table 2 presents estimates of hazard ratios for the treatment variables. In the reference model, the risk of anal cancer was significantly increased in patients with isolated PI therapy. Based on the weighted model without interaction, neither PIs, other ARVs, nor the combination increased the risk of anal cancer. Conversely, all treatment variables appeared to be associated with the risk of anal cancer in the time-dependent Cox model - leading to potentially spurious conclusions. Other variables associated with an increased risk of anal cancer in the reference model were a longer cumulative duration with CD4 count <200 cells/μl and being an MSM (men who have sex with men, vs women) (results not shown - see Additional file 3: Table S2). Our findings suggest that an increased risk of anal cancer, if any, may exist in the specific category of patients taking PI monotherapy. Table 2 Comparison of estimates of HR for ARV obtained by Cox-MSM and standard time-dependent Cox models Through a simulation study, we explored the performance of the Cox-MSM for estimating the individual effects of two treatments given simultaneously. The simulations showed that using a joint Cox-MSM in the presence of a time-varying confounder yielded unbiased estimates, while the standard time-dependent Cox model yielded biased estimates. Furthermore, we showed the importance of estimating the interaction term when exploring treatment effects from combination therapy. The strength of our simulation study is twofold: first, we generated data that are suitable for analysis by a Cox-MSM, and second, we applied a data generation process to simulate data for two treatments, whereas Vourli and Touloumi [15] and Young et al. [21] performed simulations for only one treatment. Furthermore, we generated a data structure where both combined treatments depend on each other by including an interaction term between the two treatments in the treatment-prediction model.
We also considered a realistic situation in which a specific adverse event might be caused by two treatments taken simultaneously but not by either treatment taken alone. Our simulation study has several limitations. First, we considered that the hazard depends only on the current treatment status. However, treatment effects may accumulate over time and depend on the time since exposure [29]. This requires assessing whether the treatment effects accumulate over time when estimating the individual and joined effects of treatments given in combination [18]. Furthermore, with only one time-dependent confounder, our simulated setting could be considered unrealistic and too simplistic. Further studies are needed to consider more complex simulated settings with multiple time-dependent confounders and complex hazard functions (cumulative treatment). A number of studies have proposed various algorithms for simulating data suitable for fitting Cox-MSMs [14, 17, 30] and could be useful in this context. Second, we explored situations where only two treatments or two classes of treatment were administered; however, in real life a patient could receive additional co-medications. Applying this framework to a real situation with more than two treatments would make the calculation of stabilized weights more complex, as one has to consider multiple and complex interactions between all treatments. Third, our simulations suggested that our results and conclusions are robust with respect to the number of simulated events and to the treatment or confounder effects on the hazard. Future simulations should investigate wider ranges of these parameters, as well as the potential impact of sample size, missing values or unmeasured confounders on the results. Fourth, our results confirmed the superiority of the Cox-MSM over the standard time-dependent Cox model. Other methods could be explored to estimate the individual effects of treatments given in combination.
For example, doubly robust estimation [31, 32] combines inverse probability weighting with regression modeling of the relationship between covariates and the outcome for each treatment in such a way that, as long as either the propensity score model or the regression model is correctly specified, the effect of exposure on the outcome will be correctly estimated. Other methods (e.g., targeted maximum likelihood estimation [33, 34], g-computation, g-estimation of structural nested models, etc. [35]) are also potential alternatives. In addition, other choices for estimating weights in multiple-treatment settings could be used, e.g. multinomial logistic regression or machine learning methods [36]. Taken together, future studies would be needed to compare our results with these alternative methods. Fifth, comparing the Cox-MSM parameter estimate with any conditional treatment effect estimate is not straightforward when non-collapsible measures, such as the hazard ratio or odds ratio, are employed [17, 37,38,39]. We did not perform numerical experiments to explore how the marginal and conditional estimates could differ, which is a limitation of our study. However, the difference between the conditional and marginal parameters is expected to be negligible, as the event rate in the time intervals under consideration was small [17, 38]. Exchangeability, positivity and correct model specification are three conditions for unbiased estimation of the Cox-MSM [6, 9, 27, 28]. For exchangeability, we assumed that the selected covariates are sufficient to adjust for both confounding and selection bias. The limitation is that this is not testable in an observational study. In our study, we did not observe departures from the positivity assumption after truncation of weights, as the latter assessment is based on a lack of extreme weights [27]. The lack of extreme weights obtained after truncation also provides some evidence against model misspecification.
Using the weighted model with interaction, we found a significant association between use of PIs alone and the risk of anal cancer in HIV-infected persons. The HR estimates were markedly different from those obtained with the weighted model without interaction - a finding due to a significant interaction between PIs and other ARVs (β3 = −1.43 in the weighted model). Compared with the time-dependent Cox model, the HR estimate for PIs alone was higher in the weighted model with interaction, while the HRs for the other treatment variables were lower, leading to different conclusions based on statistical testing - however, the HRs from these two models were in the same range of values. This indicates that time-dependent confounding might be weak for all treatment variables and that the time-dependent Cox estimates are only slightly biased. In previous studies, Bruyand et al. and Chao et al. found an association between PI use and anal cancer risk [23, 40]. In both cases, multivariable Poisson models were used. The first study did not adjust for CD4 count at initiation and/or cumulative duration of CD4 count <200 cells/μl, and the second adjusted for CD4 count as a time-dependent covariate, but neither dealt with complex time-dependent confounding. In our application, we used the model (the weighted model with interaction) that performed more accurately in our simulations. Nevertheless, a limitation of our application is that we did not take into account the cumulative duration of ARV exposure. This calls for further analysis with cumulative duration of ARV exposure as the exposure of interest. In summary, we evaluated the joint Cox-MSM for estimating the individual and joined effects of treatments given in combination in observational studies. The Cox-MSM performed accurately in a simulation study under all scenarios. However, it did not perform accurately when the interaction term was omitted from the model.
The application of the framework (the weighted model with interaction) to real longitudinal data confirmed the results obtained in the simulation study and showed the utility of the joint Cox-MSM for estimating the individual and joined causal effects of treatments when they are given in combination in observational studies.
Abbreviations
AIDS: Acquired immune deficiency syndrome; ANRS: Agence Nationale de Recherche sur le Sida et les hépatites virales; ARVs: Antiretroviral treatments; Cox-MSM: Marginal structural Cox model; FHDH: French Hospital Database on HIV; HIV: Human immunodeficiency virus; IPTW: Inverse probability of treatment weighting; MB: Mean bias; MCR: Mean coverage rate; MSM: Men who have sex with men; MSMs: Marginal structural models; PI: Protease inhibitor; PY: Person-years; RMSE: Root mean squared error; RNA: Ribonucleic acid; SW: Stabilized weights
References
Hernan MA, Brumback B, Robins JM. A structural approach to selection bias. Epidemiology. 2004;15:615–25.
Robins JM. A new approach to causal inference in mortality studies. Mathematical Modelling. 1986;7:1393–512.
Robins JM. Association, causation, and marginal structural models. Synthese. 1999;121(1):151–79.
Robins JM, Hernan MA. Estimation of the causal effects of time-varying exposures. Longitudinal Data Analysis. 2008:553–99.
Robins JM, Hernan MA, Brumback B. Marginal structural models and causal inference in epidemiology. Epidemiology. 2000;11(5):550–60.
Hernan MA, Brumback B, Robins JM. Marginal structural models to estimate the joint causal effect of nonrandomized treatments. J Am Stat Assoc. 2001;96:440–8.
Ellis AR, Brookhart MA. Approaches to inverse-probability-of-treatment-weighted estimation with concurrent treatments. J Clin Epidemiol. 2013;66(8 Suppl):S51–6.
Tager IB, et al. Effects of physical activity and body composition on functional limitation in the elderly: application of the marginal structural model. Epidemiology. 2004;15(4):479–93.
Howe CJ, et al. Estimating the effects of multiple time-varying exposures using joint marginal structural models: alcohol consumption, injection drug use, and HIV acquisition. Epidemiology. 2012;23(4):574–82.
Lopez-Gatell H, et al. Effect of tuberculosis on the survival of women infected with human immunodeficiency virus. Am J Epidemiol. 2007;165(10):1134–42.
Cole SR, et al. Effect of highly active antiretroviral therapy on time to acquired immunodeficiency syndrome or death using marginal structural models. Am J Epidemiol. 2003;158(7):687–94.
Bodnar LM, et al. Marginal structural models for analyzing causal effects of time-dependent treatments: an application in perinatal epidemiology. Am J Epidemiol. 2004;159(10):926–34.
Havercroft WG, Didelez V. Simulating from marginal structural models with time-dependent confounding. Stat Med. 2012;31(30):4190–206.
Karim ME, et al. On the application of statistical learning approaches to construct inverse probability weights in marginal structural cox models: hedging against weight-model misspecification. Commun Stat Simul Comput. 2016:1–30.
Vourli G, Touloumi G. Performance of the marginal structural models under various scenarios of incomplete marker's values: a simulation study. Biom J. 2015;57(2):254–70.
Westreich D, et al. A simulation study of finite-sample properties of marginal structural cox proportional hazards models. Stat Med. 2012;31(19):2098–109.
Xiao Y, Abrahamowicz M, Moodie EE. Accuracy of conventional and marginal structural cox model estimators: a simulation study. Int J Biostat. 2010;6(2):Article 13.
Xiao Y, et al. Flexible marginal structural models for estimating the cumulative effect of a time-dependent treatment on the hazard: reassessing the cardiovascular risks of didanosine treatment in the Swiss HIV cohort study. J Am Stat Assoc. 2014;109(506):455–64.
Young JG, et al. Relation between three classes of structural models for the effect of a time-varying exposure on survival. Lifetime Data Anal. 2010;16(1):71–84.
Young JG, Tchetgen Tchetgen EJ. Simulation from a known cox MSM using standard parametric models for the g-formula. Stat Med. 2014;33(6):1001–14.
Young JG, Picciotto S, Robins JM. Simulation from structural survival models under complex time-varying data structures. J Am Stat Assoc. 2008.
Ali RA, Ali MA, Wei Z. On computing standard errors for marginal structural cox models. Lifetime Data Anal. 2014;20(1):106–31.
Bruyand M, et al. Cancer risk and use of protease inhibitor or nonnucleoside reverse transcriptase inhibitor-based combination antiretroviral therapy: the D:A:D study. J Acquir Immune Defic Syndr. 2015;68(5):568–77.
Guiguet M, et al. Effect of immunodeficiency, HIV viral load, and antiretroviral therapy on the risk of individual malignancies (FHDH-ANRS CO4): a prospective cohort study. Lancet Oncol. 2009;10(12):1152–9.
Mary-Krause M, et al. Cohort profile: French hospital database on HIV (FHDH-ANRS CO4). Int J Epidemiol. 2014;43(5):1425–36.
Piketty C, et al. Incidence of HIV-related anal cancer remains increased despite long-term combined antiretroviral treatment: results from the French hospital database on HIV. J Clin Oncol. 2012;30(35):4360–6.
Cole SR, Hernan MA. Constructing inverse probability weights for marginal structural models. Am J Epidemiol. 2008;168(6):656–64.
VanderWeele TJ. On the distinction between interaction and effect modification. Epidemiology. 2009;20(6):863–71.
Csajka C, Verotta D. Pharmacokinetic-pharmacodynamic modelling: history and perspectives. J Pharmacokinet Pharmacodyn. 2006;33(3):227–79.
Karim ME, Platt RW. Estimating inverse probability weights using super learner when weight-model specification is unknown in a marginal structural cox model context. Stat Med. 2017;36(13):2032–47.
Bang H, Robins JM. Doubly robust estimation in missing data and causal inference models. Biometrics. 2005;61(4):962–73.
Robins JM, Rotnitzky A, Zhao LP. Estimation of regression coefficients when some regressors are not always observed. J Am Stat Assoc. 1994;89(427):846–66.
van der Laan M. Targeted maximum likelihood based causal inference: part 1. Int J Biostat. 2010;6(2):2.
Daniel RM, et al. Methods for dealing with time-dependent confounding. Stat Med. 2013;32(9):1584–618.
McCaffrey DF, et al. A tutorial on propensity score estimation for multiple treatments using generalized boosted models. Stat Med. 2013;32(19):3388–414.
Austin PC. The performance of different propensity score methods for estimating marginal hazard ratios. Stat Med. 2013;32(16):2837–49.
Karim ME, et al. Comparison of statistical approaches dealing with time-dependent confounding in drug effectiveness studies. Stat Methods Med Res. 2016.
Pang M, Kaufman JS, Platt RW. Studying noncollapsibility of the odds ratio with marginal structural and logistic regression models. Stat Methods Med Res. 2016;25(5):1925–37.
Chao C, et al. Exposure to antiretroviral therapy and risk of cancer in HIV-infected persons. AIDS. 2012;26(17):2223–31.
This work was supported by the Agence Nationale de Recherche sur le Sida et les hépatites virales (ANRS). The funding body did not play any role in the design of the study, the collection, analysis and interpretation of data, or the writing of the manuscript. The data used and analyzed during this study are not publicly available due to confidentiality reasons.
Author information: Sorbonne Universités, INSERM, UPMC Université Paris 06, Institut Pierre Louis d'épidémiologie et de Santé Publique (IPLESP UMRS 1136), Paris, France: Clovis Lusivika-Nzinga, Hana Selinger-Leneman, Sophie Grabar, Dominique Costagliola and Fabrice Carrat. Unité de Biostatistique et d'épidémiologie, Groupe hospitalier Cochin Broca Hôtel-Dieu, Assistance Publique Hôpitaux de Paris (AP-HP), and Université Paris Descartes, Sorbonne Paris Cité, Paris, France: Sophie Grabar. Unité de Santé Publique, Hôpital Saint-Antoine, Assistance Publique-Hôpitaux de Paris, Paris, France: Fabrice Carrat. CLN conceived the idea, performed the simulations, carried out the analysis and interpretation of results, and wrote the draft of the manuscript. HSL made substantial contributions to the acquisition of data and was involved in critically revising the manuscript. SG and DC made substantial contributions to the analysis and interpretation of results and were involved in critically revising the manuscript. FC conceived the idea, made substantial contributions to the analysis and interpretation of results, and was involved in writing and critically revising the manuscript. All authors read and gave final approval of the version to be published and agreed to be accountable for all aspects of the work. Correspondence to Fabrice Carrat. Additional file 1: Complete data generation. (DOCX 28 kb) Additional file 2: Table S1. Mean bias, standard deviation, mean squared error and mean coverage rate of estimates. (DOCX 75 kb) Additional file 3: Table S2. Multivariate parameter estimates for covariate association with the risk of anal cancer in HIV-infected persons: comparison of weighted Cox-MSM and standard time-dependent Cox models. (DOCX 22 kb) Lusivika-Nzinga, C., Selinger-Leneman, H., Grabar, S. et al.
Performance of the marginal structural Cox model for estimating individual and joined effects of treatments given in combination. BMC Med Res Methodol 17, 160 (2017). doi:10.1186/s12874-017-0434-1
Keywords: Causal inference; Time-dependent confounding; Multitherapy
Research | Open | Published: 06 March 2015
Performance analysis of E-shaped dual band antenna for wireless hand-held devices
Balamurugan Rajagopal1 & Lalithambika Rajasekaran2
Human-centric Computing and Information Sciences, volume 5, Article number: 6 (2015)
With the evolution of wireless applications, high-performance dual-band handsets have flooded the market. In this paper, a compact dual-band E-shaped planar inverted-F antenna suitable for GSM applications in handheld devices is presented. The antenna is designed for GSM 900 MHz and 1800 MHz, with 10 dB bandwidths covering 831–973 MHz and 1700–1918 MHz. The design and simulations are performed using the Finite Difference Time Domain (FDTD) technique based General Electro Magnetic Simulator, version 7.9 (GEMS 7.9). The performance analysis of the E-shaped antenna also includes the real-world interaction between the antenna element and a spherical human head model composed of three layers: skin, skull and brain. The simulated results, including S-parameters, radiation patterns, current distributions, specific absorption rate and thermal distributions, validate the proposed E-shaped antenna design as useful for compact mobile phone devices, with a comparatively low average specific absorption rate.
Over the last decade, wireless communication devices have evolved rapidly to meet the demand for high-performance mobile portable devices such as smartphones, tablets and notebooks. The handset antenna, which plays the transceiver role in a mobile phone, should be optimized for good performance. In addition to the electrical requirements, the design of a handset antenna has to take into account the resulting exposure of the user. There has also been increasing concern regarding the ill effects of the radio frequency (RF) energy emitted by mobile phone antennas.
These adverse health effects can be assessed by measuring the power coupled into human tissue and the resulting thermal change, using a dosimetric quantity called the specific absorption rate. The International Commission on Non-Ionizing Radiation Protection and the IEEE provide radiation limits for consumer products in free space. Nowadays, a variety of multiband internal antennas have been reported, which are highly preferred for slim mobile phones due to their compactness [2,7]. The following literature survey shows the use of dual-band antennas in mobile phone communications. A dual-band MIMO antenna can be used for the LTE band (0.746–0.787 GHz) and M-WiMAX (2.5–2.69 GHz); it consists of two identical elements, each 15 × 13.25 mm2, with a minimum separation of 0.5 mm between them [14]. A novel coplanar-waveguide-fed planar monopole antenna offers dual-band operation for Wi-Fi and 4G LTE; its operating bands of 2.3–3.0 GHz and 4.7–5.9 GHz are achieved by carefully optimizing the position and size of a smiling slot, and the antenna is characterized in terms of return loss and radiation pattern, with measurements in an anechoic chamber [16]. A connected E-shaped and U-shaped dual-band patch antenna for operating frequencies of 2.46 GHz and 4.9 GHz has been designed, and its bandwidth variation analyzed by changing the substrate height, bridge width, etc., for different wireless LAN applications; the simulation studies were performed using the GEMS simulation software [21]. A compact planar inverted-E-shaped dual-band antenna has been designed on a PCB of 10 × 5 × 4 mm3, and the good performance characteristics observed at 2.4 GHz and 5.5 GHz make it suitable for mobile device applications [22]. In many commercial wireless applications, PIFAs and planar monopole antennas are extensively used because they are simple and compact, with good radiation patterns and sufficient bandwidth.
Normally, the electrical characteristics of a handset antenna depend mainly on the ground plane on which the antenna is fabricated and on the phone casing. The bandwidth of the antenna element increases if the casing also resonates at the operating frequency. Bandwidth and radiation characteristics make a 2G dual-band antenna suitable for Wi-Fi and 4G LTE applications in the 2.4–2.7 GHz band and the 5.1–5.875 GHz band [13]. Currently, GSM (Global System for Mobile Communication) is a standard protocol for digital mobile communication used for phone calls and transmission of text messages, which is addressed in this paper [3]. In this paper, an E-shaped PIFA with dual 900/1800 MHz bands is introduced [2]. The design considerations and simulated results for the proposed E-shaped antenna, such as return loss, radiation pattern and current distributions, are analyzed. Further, the performance of the E-shaped antenna is evaluated by considering the real-world environment in which a mobile phone is expected to operate. The near-field environment is created with a mobile phone model that includes the antenna element, battery, exterior plastic shell and a three-layered human head model. Simulation and performance analysis of the proposed E-shaped antenna are performed using the FDTD-based GEMS simulator [11]. Section Numerical modelling covers the modeling technique and the modeling of the antenna and near-field interacting devices. Section Performance analysis of antenna in free space presents the parametric analysis of the E-shaped antenna and its current distributions in free space. Section Influence of near field on antenna performance discusses the influence of the near-field environment when the antenna is in close proximity to a human head model. Finally, section Conclusion concludes the paper. Maxwell's equations can be solved in either the time domain or the frequency domain; among the many available EM simulation techniques, FDTD works in the time domain [17,18].
As the problem size grows, the FDTD approach scales well, and broadband output can be obtained from a single time-domain run. FDTD outperforms other computational methods, such as the finite element method and the method of moments, as the computational space becomes large. For studying the biological effects of electromagnetic radiation from wireless devices, FDTD is better suited, and it is the technique employed in this work. Further, FDTD also provides accurate results for the field penetration into biological tissues. Numerical formulation using FDTD technique: In this work, the finite difference time domain technique is used throughout, and it can be formulated from Maxwell's curl equations,
$$ \nabla \times \mathbf{E}(\mathbf{r},t) = -\frac{\partial \mathbf{B}(\mathbf{r},t)}{\partial t} $$
$$ \nabla \times \mathbf{H}(\mathbf{r},t) = \mathbf{J}(\mathbf{r},t) + \frac{\partial \mathbf{D}(\mathbf{r},t)}{\partial t} $$
where E is the electric field strength, D the electric flux density, H the magnetic field strength, B the magnetic flux density, J the electric current density and ρe the electric charge density. A current density produces a magnetic field around it. From the curl equations, we see that the time derivative of the E-field depends on the spatial variation of the H-field. Hence, the value of the E-field can be computed from its previous value and the space derivative of the H-field, which in turn is time-stepped, provided the initial field values, initial conditions and boundary conditions are known [4]. The FDTD technique divides the computational space into a Cartesian grid of voxels and then allocates the components of the electric and magnetic fields so that every E-field component is surrounded by H-field components and vice versa. This scheme is known as the Yee lattice.
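The leapfrog time-stepping described above can be illustrated with a minimal one-dimensional Yee update (an illustrative Python sketch in normalized units; it is not the GEMS implementation, and the grid size, source and Courant factor of 0.5 are arbitrary choices):

```python
import numpy as np

# Minimal 1D FDTD (Yee scheme) in free space, normalized units.
# ez and hy are staggered by half a cell; each is updated from the
# spatial difference of the other, leapfrogging in time.
nz, nsteps = 200, 400
ez = np.zeros(nz)        # electric field samples
hy = np.zeros(nz - 1)    # magnetic field samples (half-cell offset)

for n in range(nsteps):
    hy += 0.5 * (ez[1:] - ez[:-1])          # update H from the curl of E
    ez[1:-1] += 0.5 * (hy[1:] - hy[:-1])    # update E from the curl of H
    ez[nz // 2] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source

print(np.max(np.abs(ez)))  # the pulse keeps bouncing between the PEC walls
```

The fixed end cells act as perfect electric conductor walls; a production solver such as GEMS instead surrounds the grid with absorbing boundaries and assigns per-voxel material constants (permittivity, conductivity) to model tissues.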
If the current changes over time, the alternating magnetic field causes an alternating electric field, which in turn causes another magnetic field, resulting in a propagating electromagnetic wave. There are several commercially available EM simulators (e.g., SEMCAD X, GEMS) that employ the FDTD technique. The computational performance of SEMCAD X is as follows: minimum grid size (mm) 300, computational domain 14.2 M cells, simulation time <15 min, simulation speed 300 M cells/s [19,20]. An FDTD-based electromagnetic simulator (GEMS version 7.9) is used throughout this work. The FDTD model including the head and hand models consists of 739675 cells. The convergence of the simulated solutions has been checked every 100 time steps, and the solutions are considered converged for the S-parameter calculations. E-shaped antenna design: The configuration of the proposed E-shaped antenna is shown in Figure 1. In general, a traditional PIFA [1,4] is composed of a metal strip, a feeding line and a shorting structure. The antenna element has a rectangular ground plane (52 mm × 32 mm). The radiating element is composed of a feed line (52 mm × 12 mm), patch S1 (20 mm × 8 mm), patch S2 (12 mm × 20 mm) and patch S3 (12 mm × 20 mm). There is free space (height = 1.8 mm) between the antenna top plate and the substrate. (Figure 1: Structure of the proposed compact dual-band E-shaped antenna.) The substrate material used has a thickness of t = 2 mm. The dimensions of the shorting plate (S4) are 10 mm × 1.8 mm. The distance between the feeding and the shorting plate is 27 mm [6]. The radiating E element is modeled as a perfect electric conductor. The excitation port is modeled as a lumped port with an internal resistance of 50 Ω. A maximum working frequency of 3 GHz is allowed for the performance analysis of the radiating antenna.
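As a rough sanity check on dimensions like these, a classical first-order PIFA design rule (a textbook approximation, not the authors' design procedure) places the fundamental resonance where the sum of the patch length and width equals a quarter wavelength, assuming the shorting plate spans the full patch width:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def pifa_quarter_wave_length_mm(freq_hz: float) -> float:
    """Approximate total resonant length L + W in mm from f = c / (4 (L + W)),
    valid for a PIFA whose shorting plate is as wide as the patch."""
    return C / (4.0 * freq_hz) * 1e3

for f_hz in (900e6, 1800e6):
    lw = pifa_quarter_wave_length_mm(f_hz)
    print(f"{f_hz / 1e6:.0f} MHz -> L + W of roughly {lw:.1f} mm")
```

This gives roughly 83 mm at 900 MHz and 42 mm at 1800 MHz. A shorting plate narrower than the patch, as here (10 mm), pushes the resonance lower, which is one reason a PIFA can be made smaller than this estimate suggests.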
Handheld device model and user head model: To meet the expected handset performance, we must not only design the antenna but also mitigate the RF interaction with the near-field environment, which influences the E-shaped antenna's performance. Figure 2 shows the handheld device model in close proximity to a spherical human head model. (Figure 2: Mobile handset interactions with the layered spherical head model.) The device is composed of the E-shaped antenna, a battery (20 mm × 25 mm × 2.5 mm) and a plastic cover (80 mm × 45 mm × 5 mm) that encloses all the components. The dielectric constant used for the plastic cover is 4.4. The antenna and battery were modeled as metal [10]. A spherical head model consisting of three layers, skin, skull and brain (Table 1) [8], is selected for the simulation study. The conductivity, permittivity and density of the tissues, which are functions of frequency (Table 1), are important factors in the power coupled into human tissues [12]. The phone model is placed at three different distances from the side of the head, and the simulated results are compared. (Table 1: Properties of human tissues.) Performance analysis of antenna in free space: The design objective is a dual-band portable handheld device antenna suitable for 900/1800 MHz GSM applications. We optimize the design through simulation using the General Electro-Magnetic Simulator (GEMS), a commercial software package based on the Finite Difference Time Domain (FDTD) technique [10]. Current distributions: While the handset is in use, pulsed current flows from the battery to the radiating element. This excitation gives rise to a magnetic field around the handset. Figure 3 shows the current distributions in the ground plane at 900 MHz and 1800 MHz. Exciting the feeding port at the right end of the E-shaped antenna produces a high-magnitude surface current in the proximity of the feeding point, which becomes almost zero near the open end.
This coupled current also affects the antenna performance by inducing heat around the handset device. (Figure 3a & b: current distribution in the ground plane at 0.9 GHz and 1.8 GHz, respectively. Surface current in the ground plane is higher near the feeding point.) Figure 4 shows the current distributions in the radiating E-shaped element at 900 MHz and 1800 MHz. Exciting the feeding port induces a high-magnitude surface current near the feed but a weak or null current in the area far from the feed [5]. Further, the weak surface current on the ground plane ensures better antenna performance by reducing the specific absorption rate (SAR), the power coupled into human tissues when the antenna is in proximity to the user's head. Since the mobile handset is usually held close to the human body during operation, it is necessary to analyze the current distribution as a function of distance (see section Influence of near field on antenna performance). (Figure 4a & b: current distribution in the E-shaped antenna element at 900 MHz and 1800 MHz, respectively. Lower surface currents away from the feeding point ultimately reduce the coupled power.) S-parameter: The simulated S-parameter and Smith chart representation of the dual-band E-shaped antenna are shown in Figure 5. Simulations were carried out using GEMS, an FDTD-based simulator [7]. The results indicate a return loss better than 25 dB. It is observed that the 10 dB bandwidth covers 831–973 MHz and 1700–1918 MHz. This satisfies the required bandwidth for GSM 850/900/1800 MHz when compared to other proposed antennas, as in [2,7]. (Figure 5: Return loss of the dual-band E-shaped antenna, showing two resonant modes at the 0.9 GHz and 1.8 GHz operating frequencies.) Bandwidth is one of the key characteristics that make this 2G dual-band antenna suitable for 4G LTE applications.
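A 10 dB bandwidth of this kind is read directly off the simulated S11 curve as the frequency span over which |S11| stays at or below −10 dB. A minimal sketch of that extraction, using synthetic (hypothetical) S11 data rather than the GEMS output:

```python
import numpy as np

def ten_db_band(freq_mhz, s11_db, threshold=-10.0):
    """Return (f_low, f_high, bandwidth) of the contiguous region around
    the deepest resonance dip where S11 is at or below the threshold."""
    below = s11_db <= threshold
    i = int(np.argmin(s11_db))  # index of the resonance dip
    lo = hi = i
    while lo > 0 and below[lo - 1]:
        lo -= 1
    while hi < len(below) - 1 and below[hi + 1]:
        hi += 1
    return freq_mhz[lo], freq_mhz[hi], freq_mhz[hi] - freq_mhz[lo]

# Synthetic S11 sweep with a single resonance near 900 MHz (illustrative only)
f = np.linspace(700, 1100, 401)
s11 = -25.0 * np.exp(-((f - 902.0) / 55.0) ** 2)

print(ten_db_band(f, s11))
```

With a dual-band response such as this antenna's, the same scan is simply repeated around each of the two dips to obtain the lower-band and upper-band figures separately.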
For example, the 10 dB bandwidth of the proposed antenna covers LTE band 19 of NTT Docomo (Japan), which has an uplink of 830–845 MHz and a downlink of 875–890 MHz, and LTE band 3 of NTT Docomo (Japan), with an uplink of 1764–1784 MHz and a downlink of 1859–1879 MHz. Similarly, FarEasTone (Taiwan) covers LTE band 3 with an uplink of 1735–1755 MHz and a downlink of 1830–1850 MHz. Hence, the proposed antenna can also be employed for 4G LTE applications [15]. 3D radiation pattern: Figures 6 and 7 present the simulated 3D gain radiation patterns and polar plots of the dual-band E-shaped antenna at the 900 MHz and 1800 MHz operating frequencies. Figures 6a and 7a show that the radiation patterns are symmetrical about the broadside direction. (Figure 6: Radiation pattern at 900 MHz. (a): gain radiation pattern of the antenna, (b): altered gain radiation pattern due to human head interaction, (c): polar plot of the gain pattern. Figure 7: Radiation pattern at 1800 MHz, panels as in Figure 6.) The antenna radiates in essentially all directions to cover the range. However, it radiates more in the positive Z direction, since it is reflected by the ground plane. The user's head in the Z direction acts as an obstacle and absorbs a certain amount of the radiated power, thereby degrading the performance of the E-shaped antenna. Figures 6b and 7b show the altered radiation pattern due to human head interaction, which absorbs part of the power radiated by the phone and thereby impacts the antenna's performance [9]. Influence of near field on antenna performance: The specific absorption rate is the subject of strict regulation for health protection. This section describes the impact of the human head model's interaction with the mobile phone handset [8]. Specific absorption rate: SAR is the rate at which RF energy is absorbed by a given mass of material, as evidenced by a rise in material temperature.
The SAR distribution in the head model is calculated from the coupled E field, the mass density (ρ) of the tissue layers and their conductivity (σ):
$$ \mathrm{SAR} = \frac{\sigma \left| \mathrm{E} \right|^{2}}{\rho} $$
SAR is averaged over a tissue mass of 1 g or 10 g [5]. The human body, which is a good conductor, acts like a receiving antenna and absorbs EM energy from the surrounding space. The tissues, composed of different salts and organic compounds, each have their own permittivity and conductivity, which are also functions of frequency and affect the power coupled into the tissue. The internal coupled fields can be calculated using the numerical FDTD technique, which gives information on realistic RF exposure. SAR analysis and discussions: The SAR values are expressed in watts per kilogram over 1 g and 10 g of head tissue. Normally, the distance between the head and the handset during operation is around 10 mm. Here, the power coupled into the head tissue is noted for three different distances: d = 0 mm (handset pressed to the user's head), d = 5 mm and d = 10 mm. Table 2 gives the SAR values for the spherical head model of a user in free space at 900 MHz/1800 MHz. (Table 2: SAR averaged over 1 g and 10 g tissue when exposed to the handheld device.) The results indicate that the power coupled into the human tissue decays with increasing distance from the handset. The SAR values are well below the SAR limit, which substantiates the suitability of the antenna design for wireless handheld device applications when the handset is placed 10 mm from the head, a normal position of the phone during operation. From Figures 8 and 9, it is observed that more current is distributed on the side of the head nearest the handset, falls off toward the other side, and is almost null on the far side of the head. A similar pattern is observed in the 3D SAR distribution.
The red colour (hot spot) in the figures indicates a higher coupled power where the mobile handset is placed nearer. In general, the SAR in the head tissue decreases as the distance from the head to the handset increases. (Figure 8: SAR and current distribution at 0.9 GHz: a) at d = 0 mm, b) at d = 5 mm, c) at d = 10 mm.) Thermal changes: Thermal effects are mainly due to the RF power absorbed by human tissues. Figures 10 and 11 show the 3D thermal distributions in human brain tissue at the 900 MHz and 1800 MHz operating frequencies. Heat induced in the tissue might affect the proper functioning of cells or affect cell metabolism [12]. However, constant blood flow maintains the body temperature in an equilibrium state. From Figures 10 and 11, it can be seen that the side of the brain where the cell phone is used receives a significantly higher dose of radiation than the other side. The variation of the thermal distribution across different human tissues is due to their conductivity and permittivity. (Figure 10: Thermal distribution at 0.9 GHz: a) at d = 0 mm, b) at d = 5 mm, c) at d = 10 mm. Figure 11: Thermal distributions at 1.8 GHz, same distances.) Figure 12a shows graphically the variation of the coupled power with the handset distance at 0.9 GHz. Both the 1 g SAR and the 10 g SAR are highest when the mobile phone antenna is placed nearest to the head model, and the SAR decreases as the distance between antenna and head increases. (Figure 12a: graphical representation of 1 g and 10 g SAR at 0.9 GHz. Figure 12b: graphical representation of 1 g and 10 g SAR at 1.8 GHz.) From Figure 12b, it is observed that at 1800 MHz the 1 g and 10 g SAR values are more than three times higher than those observed at the 900 MHz operating frequency.
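The SAR expression above can be evaluated pointwise once the local field in a voxel is known; a toy sketch with illustrative, order-of-magnitude tissue values (hypothetical numbers, not the data of Table 1):

```python
def point_sar(sigma: float, e_rms: float, rho: float) -> float:
    """Local SAR in W/kg: sigma [S/m] * |E_rms|^2 [(V/m)^2] / rho [kg/m^3]."""
    return sigma * e_rms ** 2 / rho

# Illustrative, order-of-magnitude values for brain-like tissue near 900 MHz
# (hypothetical; the paper's simulations use the Table 1 tissue properties):
sigma_brain, rho_brain = 0.94, 1040.0
print(point_sar(sigma_brain, 30.0, rho_brain))  # ~0.81 W/kg at 30 V/m RMS
```

Regulatory limits apply to this local quantity averaged over 1 g or 10 g cubes of tissue, which is what the tabulated 1 g and 10 g SAR values report.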
The values used for comparison and analysis may be somewhat inaccurate because the human head is modeled as a layered sphere, which differs from real EM exposure of an actual human. However, in this study, for both the 0.9 GHz and 1.8 GHz frequencies, the 1 g and 10 g SAR values decrease with increasing separation between the mobile phone and the head model. In this paper, a compact dual-band E-shaped antenna with comparatively low average SAR and good bandwidth has been introduced for GSM applications in handheld devices. Simulations were performed for different scenarios: the antenna in free space, and the handset device placed close to a user head model. The return loss was better than 25 dB at 900 MHz and 1800 MHz, with bandwidths of 142 MHz and 218 MHz in the lower and upper bands respectively, compared to existing antennas. The 10 dB bandwidth of the proposed E-shaped antenna covers the GSM 850/GSM 900/GSM 1800 bands. Further, the average specific absorption rate due to human interaction with the handset is well below the specified limit. The obtained results, including surface current distributions, S-parameters, radiation patterns and SAR values, demonstrate that the proposed antenna design is suitable for GSM and 4G networks and is able to achieve good performance in real-world scenarios.
References:
Corbett R, Lam EY (2012) Mobile-phone antenna design. IEEE Antennas and Propagation Magazine 54(4)
Yong J (2012) Compact dual-band CPW-fed zeroth-order resonant monopole antennas. IEEE Antennas and Wireless Propagation Letters 11
Fuguo Z, Steven G, Anthony TS H, Abd-Alhameed RA, See CH, Tim WC B, Jianzhou L, Gao W, Jiadong X (2014) Ultra-wideband dual-polarized patch antenna with four capacitively coupled feeds. IEEE Trans Antennas Propag 62(5)
Hassan Tariq C, Muhammad N, Abbasi QH, Yi H, AlJa'afreh SS (2013) Compact low-profile dual-port single wideband planar inverted-F MIMO antenna. IEEE Antennas and Wireless Propagation Letters 12
Rowley JT, Rod BW (1999) Performance of shorted microstrip patch antennas for mobile communications handsets at 1800 MHz. IEEE Trans Antennas Propag 47(5)
Jimmy T, Yih-Chien C, C-Y W (2012) Dual-band planar inverted F antenna for application in ISM, HIPERLAN, UNII and WiMAX. Proceedings of APMC
Luyi L, Jonathan R, Richard L (2013) Tunable multiband handset antenna operating at VHF and UHF bands. IEEE Trans Antennas Propag 61(7)
Md Faruk A, Sujoy M, Sudhabindu R (2009) SAR analysis in human head model exposed to mobile base-station antenna for GSM-900 band. Loughborough Antennas & Propagation Conference
Ali M, Sadler RA, Hayes GJ (2002) A uniquely packaged internal inverted-F antenna for Bluetooth or wireless LAN application. IEEE Antennas and Wireless Propagation Letters 1
Qinjiang R, Kelce W (2011) Design, modeling, and evaluation of a multiband MIMO/diversity antenna system for small wireless mobile terminals. IEEE Trans Components Packaging Manuf Technol 1(3)
Simulation tool: General Electromagnetic Simulator, http://www.2comu.com
Sooman P, Juyoung J, Yeongseog L (2004) Temperature rise in the human head and brain for portable handsets at 900 and 1800 MHz. 4th International Conference on Microwave and Millimeter Wave Technology Proceedings
Mantash M, Collardey S (2013) Dual-band Wi-Fi and 4G LTE textile antenna. 7th European Conference on Antennas and Propagation (EuCAP), 8–12:422–425
Zuxing L, Minseok H, Jaehoon C (2012) Compact dual-band MIMO antenna for 4G USB dongle application. Microw Opt Technol Lett 54(3):744–748
http://en.wikipedia.org/wiki/List_of_LTE_networks (LTE network frequency ranges, worldwide)
de Cos ME, Mantash M. Dual-band coplanar waveguide fed smiling monopole antenna for Wi-Fi and 4G LTE applications. IET Microwaves, Antennas and Propagation 7(9)
Electromagnetic Simulation Software: Solutions for Design Engineers and EM Simulation Professionals, www.remcom.com (comparison of FEM, FDTD and FEM-FDTD)
Rozlan Alias, Simulation of radiation performance for mobile phones using a hybrid FEM-FDTD computational technique. 4th Int. Conference on Modeling, Simulation and Applied Optimization (ICMSAO), 2011. doi:10.11.1109/ICMSAOL2011, pp. 577–5956
Erdem Ofli, Chung-Huan Li (2008) Analysis and optimization of mobile phone antenna radiation performance in the presence of head and hand phantoms. Turk J Elect Engg 16(1)
Claudio R. Fernandez (2004) FDTD simulations and measurement for cell phone with planar antennas. Ann Telecommun 59(9–10):1012–2030
Md Mahabub A, Md Suaibur R (2013) A connected E-shape and U-shape dual-band patch antenna for different wireless applications. Int J Sci Eng Res 4(1)
Wen Piao L, Dong-Hua Y, Zong-De L (2014) Compact dual-band planar inverted-E-shaped antenna using defected ground structure. Int J Antennas Propagation, 937423:10
The authors would like to thank doctors of various hospitals for their valuable explanations concerning human tissue properties.
Balamurugan Rajagopal: Assistant Director (Administration), All India Council for Technical Education, New Delhi, India. Lalithambika Rajasekaran: Post Graduate Engineer, Anna University (Regional Centre Coimbatore), Tamil Nadu, India. Correspondence to Balamurugan Rajagopal or Lalithambika Rajasekaran. The authors declare that they have no competing interests. In this work, BR provided constant guidance throughout, up to the final document verification. LR and BR together put forth the research idea and carried out the simulation and documentation work. Both authors have read the final manuscript. Balamurugan Rajagopal was born in Dindigul, Tamil Nadu, India in 1981.
He is presently serving as an Assistant Director (Administration) in the All India Council for Technical Education (AICTE), New Delhi, on deputation from the Department of Electrical and Electronics Engineering, Anna University, Regional Centre Coimbatore*, where he has been serving as Assistant Professor (Power Electronics and Drives) since August 2008. He completed his B.Tech. degree in Electronics and Instrumentation Engineering at Dr. B.R. Ambedkar National Institute of Technology (NIT), Jalandhar (Punjab) in 2002 and his M.E. degree (Power Electronics and Drives) at Government College of Technology, Coimbatore, Tamil Nadu in 2005. He joined Anna University, Chennai, India as Assistant Project Manager, Centre for Intellectual Property Rights and Trade Marks in February 2006 and served until May 2007. In June 2007, he joined Anna University, Coimbatore as Project Manager, Centre for Intellectual Property Rights, and in October 2007 he was appointed Assistant Professor in the Faculty of Engineering and Technology, where he continued before assuming the present position of Assistant Professor (Power Electronics and Drives) in August 2008. He also served as Assistant Controller of Examinations and Assistant Director (Centre for University Industry Collaboration) in Anna University of Technology, Coimbatore (formerly Anna University, Coimbatore). He was granted two Erasmus Mundus Fellowships (Heritage & India4EUII), funded by the European Commission, to undertake Staff Mobility Programmes at the University of Seville, Spain and Aalto University, Finland, in 2013. He successfully completed the Staff Mobility Programme at the University of Seville, Spain in October 2013. He is a member of IEEE and various IEEE societies (ComSoc, CIS, CSS, EDS, EMCS, IES, I&MS, MTT-S, PELS & RAS).
His fields of interest are Intellectual Property Rights; Electronics, Information & Communication Technologies (E, I&CT); Mobile Phone Radiation Issues; Electrical Drives; Embedded Control Systems; Power Electronics; and VLSI Design. Note: *Formerly Anna University, Coimbatore in 2007, renamed Anna University of Technology, Coimbatore in 2010, and merged with Anna University, Chennai on 1st August 2012 by the Government of Tamil Nadu. Lalithambika Rajasekaran was born in 1987 in Erode, Tamil Nadu, India. She is a postgraduate engineer, awarded her B.E. and M.E. degrees by Anna University, Chennai. She completed her M.E. (Electrical Drives and Embedded Control), securing the Gold Medal with 1st Rank, during 2011–2013 in the Department of Electrical and Electronics Engineering, Anna University Regional Centre Coimbatore, Tamil Nadu, and her B.E. (Electronics and Communication Engineering) during 2005–2009 at Anna University, Chennai. She is a member of IEEE and various IEEE societies (ComSoc, CSS, EDS, IES, SPS, MTT-S & AP-S). Her fields of interest are Bio-Electromagnetics, Antennas for Wireless Applications, Power Electronics for Renewable Energy Systems, and Embedded Control Systems.
Keywords: Dual band antenna; GSM (Global System for Mobile communication); PIFA (Planar Inverted F-antenna); S-parameter; Specific Absorption Rate (SAR); Finite Difference Time Domain (FDTD); General Electro Magnetic Simulator (GEMS)
Counting orbits of integral points in families of affine homogeneous varieties and diagonal flows
Alexander Gorodnik 1 and Frédéric Paulin 2
1 School of Mathematics, University of Bristol, Bristol BS8 1TW
2 Département de mathématique, UMR 8628 CNRS, Bât. 425, Université Paris-Sud, 91405 ORSAY Cedex, France
Journal of Modern Dynamics, January 2014, 8(1): 25-59. doi: 10.3934/jmd.2014.8.25
Received June 2013; Published July 2014
In this paper, we study the distribution of integral points on parametric families of affine homogeneous varieties. By the work of Borel and Harish-Chandra, the set of integral points on each such variety consists of finitely many orbits of arithmetic groups, and we establish an asymptotic formula (on average) for the number of the orbits indexed by their Siegel weights. In particular, we deduce asymptotic formulas for the number of inequivalent integral representations by decomposable forms and by norm forms in division algebras, and for the weighted number of equivalence classes of integral points on sections of quadrics. Our arguments use the exponential mixing property of diagonal flows on homogeneous spaces.
Keywords: homogeneous variety, counting, norm form, exponential decay of correlations, Siegel weight, diagonalizable flow, integral point, mixing, decomposable form.
Mathematics Subject Classification: Primary: 37A17, 37A45; Secondary: 14M17, 20G20, 14G05, 11E2.
Citation: Alexander Gorodnik, Frédéric Paulin. Counting orbits of integral points in families of affine homogeneous varieties and diagonal flows. Journal of Modern Dynamics, 2014, 8 (1): 25-59. doi: 10.3934/jmd.2014.8.25
References:
T. Apostol, Introduction to Analytic Number Theory, Undergrad. Texts Math., (1976).
M. Babillot, Points entiers et groupes discrets: De l'analyse aux systèmes dynamiques, in Rigidité, (2002), 1.
B. Bekka, P. de la Harpe and A. Valette, Kazhdan's Property (T), New Math. Mono., (2008). doi: 10.1017/CBO9780511542749.
Y. Benoist and H. Oh, Effective equidistribution of $S$-integral points on symmetric varieties, Ann. Inst. Fourier (Grenoble), 62 (2012), 1889. doi: 10.5802/aif.2738.
A. Borel, Ensembles fondamentaux pour les groupes arithmétiques, in Colloque sur la Théorie des Groupes Algébriques (Bruxelles, 1962), 23.
A. Borel, Introduction aux Groupes Arithmétiques, Publications de l'Institut de Mathématique de l'Université de Strasbourg, (1341).
A. Borel, Linear Algebraic Groups, 2nd edition, (1991). doi: 10.1007/978-1-4612-0941-6.
A. Borel, Reduction theory for arithmetic groups, in Algebraic Groups and Discontinuous Subgroups (eds. A. Borel and G. D. Mostow) (Proc. Sympos. Pure Math., Boulder, 1965), 20.
A. Borel and Harish-Chandra, Arithmetic subgroups of algebraic groups, Ann. of Math. (2), 75 (1962), 485. doi: 10.2307/1970210.
A. Borel and L. Ji, Compactifications of Symmetric and Locally Symmetric Spaces, Mathematics: Theory & Applications, (2006).
M. Borovoi and Z. Rudnick, Hardy-Littlewood varieties and semisimple groups, Invent. Math., 119 (1995), 37. doi: 10.1007/BF01245174.
L. Clozel, Démonstration de la conjecture $\tau$, Invent. Math., 151 (2003), 297. doi: 10.1007/s00222-002-0253-8.
H. Cohn, A Second Course in Number Theory, Wiley, (1962).
J.-L. Colliot-Thélène and F. Xu, Brauer-Manin obstruction for integral points of homogeneous spaces and representation by integral quadratic forms, Compositio Math., 145 (2009), 309. doi: 10.1112/S0010437X0800376X.
M. Cowling, Sur les coefficients des représentations unitaires des groupes de Lie simples, in Analyse Harmonique sur les Groupes de Lie (Sém. Nancy-Strasbourg 1976-1978), (1979).
W. Duke, Z. Rudnick and P. Sarnak, Density of integer points on affine homogeneous varieties, Duke Math. J., 71 (1993), 143. doi: 10.1215/S0012-7094-93-07107-4.
A. Eskin and C. McMullen, Mixing, counting, and equidistribution in Lie groups, Duke Math. J., 71 (1993), 181. doi: 10.1215/S0012-7094-93-07108-6.
A. Eskin, S. Mozes and N. Shah, Unipotent flows and counting lattice points on homogeneous varieties, Ann. of Math. (2), 143 (1996), 253. doi: 10.2307/2118644.
A. Eskin and H. Oh, Representations of integers by an invariant polynomial and unipotent flows, Duke Math. J., 135 (2006), 481. doi: 10.1215/S0012-7094-06-13533-0.
A. Eskin, Z. Rudnick and P. Sarnak, A proof of Siegel's weight formula, Internat. Math. Res. Notices, 5 (1991), 65. doi: 10.1155/S1073792891000090.
W. T. Gan and H. Oh, Equidistribution of integer points on a family of homogeneous varieties: A problem of Linnik, Compositio Math., 136 (2003), 323. doi: 10.1023/A:1023256605535.
A. Gorodnik and H. Oh, Rational points on homogeneous varieties and equidistribution of adelic periods, Geom. Funct. Anal., 21 (2011), 319. doi: 10.1007/s00039-011-0113-z.
K. Györy, On the distribution of solutions of decomposable form equations, in Number Theory in Progress, (1997), 237.
M. Hirsch, Differential Topology, Grad. Texts Math., (1976).
D. Kelmer and P. Sarnak, Strong spectral gaps for compact quotients of products of $PSL(2,\mathbb{R})$, J. Euro. Math. Soc., 11 (2009), 283. doi: 10.4171/JEMS/151.
T. Kimura, Introduction to Prehomogeneous Vector Spaces, Transl. Math. Mono., (2003).
D. Kleinbock and G. Margulis, Bounded orbits of nonquasiunipotent flows on homogeneous spaces, in Sinaĭ's Moscow Seminar on Dynamical Systems, (1996), 141.
D. Kleinbock and G. Margulis, Logarithm laws for flows on homogeneous spaces, Invent. Math., 138 (1999), 451. doi: 10.1007/s002220050350.
H. Koch, Number Theory: Algebraic Numbers and Functions, Grad. Stud. Math., (2000).
S. Lang, Algebraic Number Theory, Second edition, (1994). doi: 10.1007/978-1-4612-0853-2.
D. N. Lehmer, Asymptotic evaluation of certain totient sums, Amer. J. Math., 22 (1900), 293. doi: 10.2307/2369728.
A. Nevo, Exponential volume growth, maximal functions on symmetric spaces, and ergodic theorems for semi-simple Lie groups, Erg. Theo. Dyn. Syst., 25 (2005), 1257. doi: 10.1017/S0143385704000951.
H. Oh, Hardy-Littlewood system and representations of integers by an invariant polynomial, Geom. Funct. Anal., 14 (2004), 791. doi: 10.1007/s00039-004-0475-6.
H. Oh, Orbital counting via mixing and unipotent flows, in Homogeneous Flows, (2010), 339.
E. Peyre, Obstructions au principe de Hasse et à l'approximation faible, Séminaire Bourbaki, 299 (2005), 165.
J. Parkkonen and F. Paulin, Équidistribution, comptage et approximation par irrationnels quadratiques, J. Mod. Dyn., 6 (2012), 1. doi: 10.3934/jmd.2012.6.1.
J. Parkkonen and F. Paulin, Counting common perpendicular arcs in negative curvature, preprint.
J. Parkkonen and F. Paulin, On the arithmetic of crossratios and generalised Mertens' formulas, to appear in Ann. Fac. Scien. Toulouse, (2013).
V. Platonov and A. Rapinchuk, Algebraic Groups and Number Theory, Pure and Applied Mathematics, (1994).
M. Raghunathan, Discrete Subgroups of Lie Groups, Ergebnisse der Mathematik und ihrer Grenzgebiete, (1972).
I. Reiner, Maximal Orders, Academic Press, (1975).
P. Sarnak, Asymptotic behavior of periodic orbits of the horocycle flow and Eisenstein series, Comm. Pure Appl. Math., 34 (1981), 719. doi: 10.1002/cpa.3160340602.
M. Sato and T.
Shintani, On zeta functions associated with prehomogeneous vector spaces,, Ann. of Math. (2), 100 (1974), 131. doi: 10.2307/1970844. Google Scholar W. M. Schmidt, Norm form equation,, Ann. of Math. (2), 96 (1972), 526. doi: 10.2307/1970824. Google Scholar J.-P. Serre, Cours d'arithmetique,, Collection SUP:, (1970). Google Scholar C. L. Siegel, On the theory of indefinite quadratic forms,, Ann. of Math. (2), 45 (1944), 577. doi: 10.2307/1969191. Google Scholar C. L. Siegel, The average measure of quadratic forms with given determinant and signature,, Ann. of Math. (2), 45 (1944), 667. doi: 10.2307/1969296. Google Scholar T. A. Springer, Linear algebraic groups,, in Algebraic Geometry IV (eds. A. Parshin and I. Shavarevich), (1994), 1. doi: 10.1007/978-3-662-03073-8. Google Scholar J. L. Thunder, Decomposable form inequalities,, Ann. of Math. (2), 153 (2001), 767. doi: 10.2307/2661368. Google Scholar V. E. Voskresenskiĭ, Algebraic Groups and their Birational Invariants,, Transl. Math. Mono., (1998). Google Scholar A. Weil, L'intégration dans les groupes topologiques et ses applications,, Hermann, (1965). Google Scholar Jacinto Marabel Romo. A closed-form solution for outperformance options with stochastic correlation and stochastic volatility. Journal of Industrial & Management Optimization, 2015, 11 (4) : 1185-1209. doi: 10.3934/jimo.2015.11.1185 David Iglesias-Ponte, Juan Carlos Marrero, David Martín de Diego, Edith Padrón. Discrete dynamics in implicit form. Discrete & Continuous Dynamical Systems - A, 2013, 33 (3) : 1117-1135. doi: 10.3934/dcds.2013.33.1117 Anna Amirdjanova, Jie Xiong. Large deviation principle for a stochastic navier-Stokes equation in its vorticity form for a two-dimensional incompressible flow. Discrete & Continuous Dynamical Systems - B, 2006, 6 (4) : 651-666. doi: 10.3934/dcdsb.2006.6.651 Abbas Bahri. Recent results in contact form geometry. Discrete & Continuous Dynamical Systems - A, 2004, 10 (1&2) : 21-30. 
International Conference on Business Information Systems
BIS 2019: Business Information Systems pp 45-54
Time Series Forecasting by Recommendation: An Empirical Analysis on Amazon Marketplace
Álvaro Gómez-Losada, Néstor Duch-Brown
First Online: 18 May 2019
Part of the Lecture Notes in Business Information Processing book series (LNBIP, volume 353)
This study proposes a forecasting methodology for univariate time series (TS) using a Recommender System (RS). The RS is built from a given TS as its only input data, following an item-based Collaborative Filtering approach. A set of top-N values is recommended for this TS, and these represent the forecasts. The idea is to emulate the RS elements (the users, items and ratings triple) from the TS. Two TS obtained from Italy's Amazon webpage were used to evaluate this methodology, and very promising performance results were obtained even in the difficult environment chosen to conduct forecasting (short and unevenly spaced TS). This performance depends on the similarity measure used and suffers from the same problems as other RSs (e.g., cold start). However, this approach does not require high computational power to perform, and its intuitive conception allows it to be deployed with any programming language.
Keywords: Collaborative Filtering, Time series, Forecasting, Data science
The original version of this chapter was revised: it has been changed to open access under a CC BY 4.0 license and the copyright holder is now "The Author(s)". The book has also been updated with these changes. The correction to this chapter is available at https://doi.org/10.1007/978-3-030-20485-3_42
1 Introduction
Broadly speaking, autocorrelation is the comparison of a time series (TS) with itself at a different time. Autocorrelation measures the linear relationship between lagged values of a TS and is central to numerous forecasting models that incorporate autoregression.
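Before moving on, the lag comparison just described can be made concrete with a few lines of NumPy. This is an illustrative aside, not part of the paper; the function name and toy series are our own:

```python
import numpy as np

def autocorr(ts, lag=1):
    """Sample correlation between a series and its copy shifted by `lag`."""
    x, y = ts[:-lag], ts[lag:]
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc**2).sum() * (yc**2).sum()))

ts = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 4.0, 3.0, 2.0])
r1 = autocorr(ts, lag=1)  # positive for this slowly varying series
```

A value near +1 indicates that consecutive observations move together; the paper replaces this linear comparison with co-occurrence counts between a TS and its lagged copy.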
In this study, this idea is borrowed and incorporated into a recommender system (RS) for forecasting purposes. RSs apply knowledge discovery techniques to the problem of helping users find interesting items. Among the wide taxonomy of recommendation methods, Collaborative Filtering (CF) [1] is the most popular approach for RS design [2]. CF is based on an intuitive paradigm by which items are recommended to a user by looking at the preferences of the people this user trusts. User-based and item-based [3] recommendations are two common approaches to CF. The first evaluates the interest of a user in an item using the ratings for this item by other users, called neighbours, who have similar rating patterns. Item-based CF, on the other hand, considers two items similar if several users of the system have rated them in a similar fashion [4]. In both cases, the first task when building a CF process is to represent the user-item interactions in the form of a rating matrix. The idea is that, given rating data by many users for many items, one can predict a user's rating for an item not yet known to the user, or identify a set of N items that the user will like the most (the top-N recommendation problem). The latter is the approach followed in this study. The goal of this study is to introduce a point forecasting methodology for univariate TS using an item-based CF framework, and in particular to study the behaviour of this methodology on short and unevenly spaced TS. On one side, the distinct values of a TS are considered the space of users in the RS. On the other, items are represented by the distinct values of a lagged version of this TS. Ratings are obtained by studying the frequencies of co-occurrence of values from both TS. Basically, the forecast is produced by averaging the top-N set of items (distinct values of the shifted TS) recommended to a particular user (a given value in the original TS).
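This users-items-ratings construction (detailed later in Sect. 3) can be sketched in a few lines of Python with pandas, the toolset the paper itself reports using (Sect. 4.7). The function name, toy series, and default thresholds below are our own illustrative assumptions, not the authors' code:

```python
import numpy as np
import pandas as pd

def build_rating_matrix(ts, h=1, b=0):
    """Emulate R from a univariate TS: rows (users) are distinct rounded
    values of TS0, columns (items) are distinct values of TS1 (TS0 shifted
    forward by h), and ratings are co-occurrence counts above threshold b."""
    ts0 = np.round(np.asarray(ts)).astype(int)
    users = pd.Series(ts0[:-h], name="TS0")   # values paired with...
    items = pd.Series(ts0[h:], name="TS1")    # ...their h-step successors
    R = pd.crosstab(users, items)             # n_jk = joint absolute frequency
    return R.where(R > b, 0)                  # drop ratings at or below b

ts = [10.2, 11.7, 10.4, 12.1, 11.9, 10.3, 11.6, 12.2, 10.1, 11.8]
R = build_rating_matrix(ts, h=1)
```

Rows of R index the distinct rounded values of the original TS (the users), columns index the distinct values of the shifted TS (the items), and each entry counts how often the two values co-occur one step apart.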
2 Related Work
TS forecasting has been incorporated into recommendation processes in several works to improve the users' experience in e-commerce sites. However, to the best of the authors' knowledge, the use of a RS framework as a tool for producing forecasts of TS is new in the literature.
2.1 Item-Based CF Approach
This section describes the basics and notation of a standard item-based CF RS, with a focus on the approach followed in this study. Most CF techniques use a database as input data in the form of a user-item matrix \(\mathbf {R}\) of ratings (preferences). In a typical item-based scenario, there is a set of m users \(\mathcal {U}=\{ u_{1}, u_{2},...,u_{m} \}\), a set of n items \(\mathcal {I}=\{ i_{1}, i_{2},..., i_{n} \}\), and a user-item matrix \(\mathbf {R}\), with \(r_{jk}\) representing the rating of user \(u_{j}\) (\(1 \le j \le m\)) for item \(i_{k}\) (\(1 \le k \le n\)). In \(\mathbf {R}\), each row represents a user \(u_{j}\) and each column represents an item \(i_{k}\). Some filtering criteria may be applied by removing from \(\mathbf {R}\) those \(r_{jk}\) entries below a predefined threshold b. One of the novelties of this study is the creation of the \(\mathbf {R}\) matrix from a TS, which is explained in Sect. 3. Basically, \(\mathbf {R}\) is created from the TS under study and its lagged copy, considering them as the space of users and items, respectively. The rating values (\(r_{jk}\)) are obtained by cross-tabulating both series: instead of studying their relation with an autocorrelation approach, the frequency of co-occurrence of values is considered. In order to clarify how a TS forecasting problem is adapted in this study using a RS, Table 1 identifies some assumed equivalences.
Table 1. Some equivalences used in this study to build the forecasting recommender system (RS) from a given time series (TS).
Set of distinct values in the TS: \(\mathcal {U}\), the set of users with cardinality m.
Set of distinct values in the shifted TS: \(\mathcal {I}\), the set of items with cardinality n.
Two distinct values in the TS: \(u_{j}, u_{l}\), a pair of users.
Two distinct values of the shifted TS: \(i_{i}, i_{j}\), a pair of items.
Number of times a distinct value in the TS and its shifted version co-occur: \(r_{jk}\), the rating of user \(u_{j}\) on item \(i_{k}\).
TS value for which a forecast is made: \(u_{a}\), the active user to whom an item is recommended.
The model-building step begins by determining a similarity between each pair of items from \(\mathbf {R}\). Similarities are stored in a new matrix \(\mathbf {S}\), where \(s_{ij}\) represents the similarity between items i and j (\(1 \le i,j \le n\)). \(s_{ij}\) is obtained by computing a similarity measure on those users who have rated both items i and j. Sometimes a minimum number of users who have rated the (i, j) pair is required to compute the similarity; this quantity will be referred to as the threshold c. Traditionally, among the most commonly used similarity measures are the Pearson correlation, cosine, constrained Pearson correlation and mean squared differences [5]. In this study the cosine and Pearson correlation measures are used, but also the Otsuka-Ochiai coefficient, which is borrowed from the Geosciences [8] and used by leading online retailers like Amazon [9]. Some notation follows at this stage of the modelling. The vector of ratings provided for item i is denoted by \(\mathbf {r}_{i}\) and \(\bar{r}_{i}\) is the average value of these ratings. The set of users who have rated item i is denoted by \(\mathcal {U}_{i}\), item j by \(\mathcal {U}_{j}\), and the set of users who have rated both by \(\mathcal {U}_{ij}\).
Forecasting.
The aim of the item-based algorithm is to create recommendations for a user, called the active user \(u_{a} \in \mathcal {U}\), by looking into the set of items this user has rated, \(I_{u_{a}}\subseteq \mathcal {I}\). For each item \(i \in I_{u_{a}}\), only the k most similar items are retained in a set \(\mathcal {S}(i)\). Then, considering the ratings that \(u_{a}\) has also made on items in \(\mathcal {S}(i)\), a weighted prediction measure can be applied. This approach returns a series of estimated ratings for items different from those in \(I_{u_{a}}\), which can be scored. Only the top-ranked items are included in the list of N items to be recommended to \(u_{a}\) (the top-N recommended list). The k and N values have to be decided by the experimenter. In this study, each active user (\(u_{a}\)) was randomly selected from \(\mathcal {U}\) following a cross-validation scheme, which is explained next. To every \(u_{a}\), a set of recommendable items is presented (the forecast). The space of items to recommend is represented by the distinct values of the shifted TS. Since the aim is to provide a point forecast, the numerical values included in the top-N recommended list are averaged.
Evaluation of the Recommendation. The basic structure for offline evaluation of a RS is based on the train-test setup common in machine learning [5, 6]. A usual approach is to split users into two groups, the training and test sets of users (\(\mathcal {U}_{\,train} \cup \mathcal {U}_{\,test}=\mathcal {U}\)). Each user in \(\mathcal {U}_{\,test}\) is considered to be an active user \(u_{a}\). Item ratings of users in the test set are split into two parts, the query set and the target set. Once the RS is built on \(\mathcal {U}_{\,train}\), the RS is provided with the query set as user history, and the recommendation produced is validated against the target set.
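The user-level hold-out just described can be sketched as follows. This is a minimal illustration; the function name, split fraction, and fixed random seed are our own assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def split_users(users, test_frac=0.2):
    """Randomly hold out a fraction of users as active (test) users."""
    users = np.asarray(users)
    idx = rng.permutation(len(users))
    n_test = max(1, int(len(users) * test_frac))
    return users[idx[n_test:]], users[idx[:n_test]]  # (train, test)

u_train, u_test = split_users(np.arange(50))  # 40 training users, 10 active users
```

Each held-out user's own ratings would then be split again into a query part (given to the RS as history) and a target part (withheld for validation).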
It is assumed that if a RS performs well in predicting the withheld items (the target set), it will also perform well in finding good recommendations for unknown items [7]. Typical metrics for evaluating accuracy in RS are the root mean square error (RMSE), the mean absolute error (MAE), or other indirect functions that estimate the novelty of the recommendation (e.g., serendipity). The MAE quality measure was used in this study.
3 Creation of the Rating Matrix
This section describes the steps to transform a given TS into a matrix of user-item ratings (\(\mathbf {R}\)). This should be considered the main contribution of this study; the remaining steps do not differ greatly from a traditional item-based approach beyond the necessary adaptations to the TS forecasting context. Since the aim is to emulate \(\mathbf {R}\), the first step is to generate a space of users and items from the TS. Thus, the TS values are rounded to the nearest integer, and the resulting series is called TS\(_{\mathbf {0}}\). The distinct (unique) values of TS\(_{\mathbf {0}}\) are assumed to be \(\mathcal {U}\). The second step is setting up \(\mathcal {I}\). For that, TS\(_{\mathbf {0}}\) is shifted forward in time by h time stamps; the new TS is called TS\(_{\mathbf {1}}\). The value of h depends on the intended forecasting horizon. Now, the distinct values of TS\(_{\mathbf {1}}\) set \(\mathcal {I}\). Once \(\mathcal {U}\) and \(\mathcal {I}\) have been set, they are studied as a bivariate distribution of discrete random variables. The last step is to compute the joint (absolute) frequency distribution of \(\mathcal {U}\) and \(\mathcal {I}\). Thus, it is assumed that \(n_{jk}\equiv r_{jk}\), where \(n_{jk}\) represents the frequency (co-occurrence) of the \(u_{j}\) value of TS\(_{\mathbf {0}}\) and the \(i_{k}\) value of TS\(_{\mathbf {1}}\). These steps are summarized in an algorithm listing (figure not reproduced in this extraction).
Some Considerations.
The approach followed here experiences the same processing problems as a conventional item-based approach, namely sparsity in \(\mathbf {R}\) and cold-start situations (users and items with low or inexistent numbers of ratings). In large RSs from e-commerce retailers, usually \(n\ll m\); under this approach, however, \(n\simeq m\).
4 Experimental Evaluation
This section describes the sequential steps followed in this study for creating an item-based CF RS for the purpose of TS forecasting.
4.1 Data Sets
Two short TS were used to evaluate this methodology. These TS were obtained by scraping the best-selling products from Italy's Amazon webpage between 5th April, 2018 and 14th December, 2018. The scraping process was sequential: it consisted of obtaining the price from the first to the last item of each category, and from the first category to the last. The first TS (TS-1) was created by averaging the prices of the best-selling products included in the Amazon devices category. The second TS (TS-2) represents the price changes of the most dynamic best-selling product on this marketplace site. Italy's Amazon webpage experienced changes during the eight-month period of the crawling process due to commercial reasons. The main change is related to the number of best-selling products shown for each category (e.g., 20, 50 or 100). As a consequence, the duration of the scraping cycles is not constant, and the TS derived from these data (TS-1 and TS-2) are not equally spaced in time. In this study, the data obtained in the scraping process are considered as a sequence of time events. It is therefore worth noting that the values of TS-1 were obtained by averaging a different number of products (20, 50, or 100), according to the number of products shown by Amazon at different times. Their main characteristics are shown in Table 2.
Table 2. TS characteristics used in the methodology testing (P: percentile; min: minimum value; max: maximum value; in €). (The table values, including the P75-P25 range and the number of distinct values m, were lost in extraction.)
4.2 Creation of the Rating Matrices
A rating matrix \(\mathbf {R}\) was created for TS-1 and TS-2 according to the algorithm described in Sect. 3. As mentioned before, the varying duration of the scraping cycles causes unevenly spaced observations in the TS. It also represents an additional difficulty when setting a constant forecasting horizon (h) as described in the algorithm. Therefore, in this study it is necessary to assume that the forecasting horizon coincides with the duration of the scraping cycle, independently of its length in time. In practice, this means that the TS representing the items (TS\(_{\mathbf {1}}\)) was obtained by lagging the original TS representing the users (TS\(_{\mathbf {0}}\)) one position forward in time. From empirical observation, h takes values of approximately 1 h, 2 h or 4 h depending on whether Amazon shows the 20, 50 or 100 best-selling products for each category, respectively. The lack of proportionality between the duration of the scraping cycles and the number of best-selling products shown is explained by technical reasons in the crawling process (the structure of Amazon's webpage for each product affects the depth of the scraping process). Ratings with a value \(\le 3\) (the threshold b) were removed from the corresponding \(\mathbf {R}\).
4.3 Similarity Matrix Computation
Three symmetric functions were used in this study to calculate different \(\mathbf {S}\) matrices for TS-1 and TS-2. The Pearson correlation (1) and cosine (2) similarity functions are standard in the RS field.
The third one, the Otsuka-Ochiai coefficient, incorporates a geometric mean in the denominator:
$$\begin{aligned} s_{\mathbf {1}}(i,j)= & {} \frac{\sum _{u \in \mathcal {U}_{i,j}}(r_{u,i}-\bar{r}_{i})(r_{u,j}-\bar{r}_{j})}{\sqrt{\sum _{u \in \mathcal {U}_{i,j}}(r_{u,i}-\bar{r}_{i})^{2}} \,\, \sqrt{\sum _{u \in \mathcal {U}_{i,j}}(r_{u,j}-\bar{r}_{j})^{2}}} \end{aligned}$$
$$\begin{aligned} s_{\mathbf {2}}(i,j)= & {} \frac{\mathbf {r}_{i} \,\,\bullet \,\, \mathbf {r}_{j}}{\Vert \mathbf {r}_{i} \Vert _{2} \,\, \Vert \mathbf {r}_{j} \Vert _{2}}=\frac{\sum _{u \in \mathcal {U}_{i,j}} r_{u,i} \,\,\bullet \,\, r_{u,j}}{\sqrt{\sum _{u \in \mathcal {U}_{i,j}} r^{2}_{u,i}} \,\, \sqrt{\sum _{u \in \mathcal {U}_{i,j}} r^{2}_{u,j}}} \end{aligned}$$
$$\begin{aligned} s_{\mathbf {3}}(i,j)= & {} \frac{\mid \mathcal {U}_{ij} \mid }{\sqrt{\mid \mathcal {U}_{i} \mid \, \mid \mathcal {U}_{j}\mid }} \end{aligned}$$
where \(\bullet \), \(\Vert \cdot \Vert _{2}\) and \(\mid \cdot \mid \) denote the dot product, the \(l_{2}\) norm, and the cardinality of a set, respectively. These similarity measures were calculated when the minimum number of users rating a given pair of items was c \(\ge 3\), and the value of k was set to 3.
4.4 Generation of the Top-N Recommendation
This step begins by looking at the set of items the active user \(u_{a}\) has rated, \(I_{u_{a}}\). In particular, the interest is in predicting ratings for items \(j \notin I_{u_{a}}\), i.e., those not yet rated by user \(u_{a}\). The estimated rating (\(\hat{r}_{u_{a},j}\)) for a given item \(j \notin I_{u_{a}}\) was calculated according to (4), where \(\mathcal {S}(j)\) denotes the items rated by user \(u_{a}\) that are most similar to item j:
$$\begin{aligned} \hat{r}_{u_{a},j}= & {} \frac{1}{\sum _{i \in \mathcal {S}(j)} \, s(i,j) } \, {\sum _{i \in \mathcal {S}(j)} \, s(i,j) \,\, r_{u_{a},i}} \end{aligned}$$
The value of \(\hat{r}_{u_{a},j}\) can be considered a score that is calculated for each item not in \(I_{u_{a}}\).
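A brute-force reading of Eqs. (1)-(4), together with the MAE of Eq. (5) used later for evaluation, can be sketched as follows. This is our own illustrative implementation, not the authors' code: it assumes a small dense rating matrix in which zero means "not rated", and takes the item means \(\bar{r}_{i}\) over the users who rated each item:

```python
import numpy as np

def item_similarities(R):
    """Pearson (1), cosine (2) and Otsuka-Ochiai (3) item-item similarities
    for a small dense rating matrix R (rows = users, columns = items)."""
    n = R.shape[1]
    s1, s2, s3 = (np.zeros((n, n)) for _ in range(3))
    for i in range(n):
        for j in range(n):
            ri, rj = R[:, i], R[:, j]
            both = (ri > 0) & (rj > 0)                # U_ij: rated both items
            if not both.any():
                continue
            x, y = ri[both], rj[both]
            xc, yc = x - ri[ri > 0].mean(), y - rj[rj > 0].mean()
            den = np.sqrt((xc**2).sum() * (yc**2).sum())
            s1[i, j] = (xc * yc).sum() / den if den > 0 else 0.0
            s2[i, j] = (x * y).sum() / np.sqrt((x**2).sum() * (y**2).sum())
            s3[i, j] = both.sum() / np.sqrt((ri > 0).sum() * (rj > 0).sum())
    return s1, s2, s3

def predict_rating(R, S, ua, j, k=3):
    """Eq. (4): weighted score of unrated item j for active user ua."""
    rated = np.where(R[ua] > 0)[0]                    # I_ua
    top = rated[np.argsort(S[j, rated])[::-1][:k]]    # S(j) within I_ua
    w = S[j, top]
    return float((w * R[ua, top]).sum() / w.sum()) if w.sum() > 0 else 0.0

def mae(errors):
    """Eq. (5): mean absolute error over the active users."""
    return float(np.mean(np.abs(errors)))

R = np.array([[5., 3., 0.], [4., 0., 4.], [1., 1., 0.], [0., 2., 5.]])
s1, s2, s3 = item_similarities(R)
score = predict_rating(R, s3, ua=0, j=2)  # score item 2 for user 0
```

In the co-occurrence setting of Sect. 3, ratings are counts, so the zero-means-unrated convention matches the thresholding applied to \(\mathbf {R}\).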
Finally, the highest-scored items are included in the top-N recommended list.
4.5 Evaluation
The set of users (\(\mathcal {U}\)) was split following an 80:20 proportion to obtain the training and test sets of users (\(\mathcal {U}_{\,train}\) and \(\mathcal {U}_{\,test}\)), respectively. Every user in \(\mathcal {U}_{\,test}\) (20% of the distinct values in TS\(_{0}\)) was considered an active user (\(u_{a}\)) to whom a set of items is recommended. The history of each \(u_{a}\) was again split in an 80:20 proportion to obtain the query and target sets, respectively. The recommendation produced was validated against the target set of each \(u_{a}\).
Evaluation Metric. The MAE value was obtained according to (5):
$$\begin{aligned} MAE= & {} \frac{\sum _{i=1}^{n} \mid e_{i} \mid }{n} \end{aligned}$$
where n represents the number of active users to whom a recommendation has been suggested. The N value used to generate a top-N recommendation was set dynamically according to the number of items in the target set of each \(u_{a}\). Thus, the value of e is the difference between the average value of the top-N recommended items and the average value of the items in the target set.
4.6 Performance Results
The MAE results assessing the quality of the forecasting proposal are shown in Table 3 for the two analysed TS and the three similarity measures studied.
Table 3. MAE performance on both TS and the different similarity measures (\(s_{\mathbf {1}}\), \(s_{\mathbf {2}}\) and \(s_{\mathbf {3}}\): Pearson correlation, cosine and Otsuka-Ochiai similarities, respectively), in €. (The numerical entries of the table were lost in extraction.)
It can be seen that the Otsuka-Ochiai similarity (\(s_{\mathbf {3}}\)) yields a better performance on both TS studied. It is necessary to recall the characteristics of the TS used for testing this methodology (TS-1 and TS-2).
They are characterized by unevenly spaced observations, and also by the variable forecasting horizon imposed by the data acquisition environment. This represents a very complex setting in which to perform forecasting. When forecasting the average price for a given category, or the price of a given product, the results could be considered an estimation of the trend of such prices (uptrend or downtrend) rather than an attempt to forecast their exact values. Other experiments have been carried out to test this methodology with ozone (O\(_{3}\)) TS (results not shown). These experiments with conventional TS (evenly spaced observations and a constant forecasting horizon) have yielded very good forecasting results. Therefore, TS-1 and TS-2 should be considered to represent an extreme environment in which to perform forecasting. One of the advantages of this approach is that it does not require high computational power to operate, because it analyses the distinct rounded values of a given TS, whose number is largely independent of the TS length. Besides, the proposed approach is very intuitive, allows further developments to be incorporated, and can be deployed using any programming language.
4.7 Computational Implementation
The computational implementation was accomplished using ad hoc Python functions, except for those specified in Table 4. The first two tasks correspond to the creation of the rating matrix (\(\mathbf {R}\)), and the remaining ones to the similarity matrix computation (\(\mathbf {S}\)); in particular, the last two are used in the computation of the cosine similarity measure.
Table 4. Python functions for specific tasks: TS shifting; cross-tabulation (crosstab); iterating over pairs of items (itertools); \(l_{2}\) norm (numpy.linalg).
5 Conclusions
This study aims to produce a point forecast for a TS by adopting a RS approach, in particular item-based CF.
To that end, the basic elements of a RS (the users, items and ratings triple) were emulated using a TS as the only input data. An autocorrelation-based algorithm is introduced to create a rating matrix from a given TS. This methodology was tested using two TS obtained from Italy's Amazon webpage. Performance results are promising even though the analyzed TS represent a very difficult setting in which to conduct forecasting, since the TS are unevenly spaced and the forecasting horizon is not constant. Application of classical forecasting approaches (e.g., autoregression models) to this type of TS is not possible, mainly due to the irregular time stamps at which observations are obtained. Thus, the introduced algorithm should be considered a contribution to forecasting practice for both short and unevenly spaced TS. In addition, the computational time required to obtain forecasting estimates is short, because such estimates are obtained from the distinct values of the TS rather than from all the values forming the TS. Further developments include considering contextual information when building the RS and transforming the sequence of events into a TS with evenly spaced data.
The views expressed are purely those of the authors and may not in any circumstances be regarded as stating an official position of the European Commission.
References
Goldberg, D., Nichols, D., Oki, B.M., Terry, D.: Using collaborative filtering to weave an information tapestry. Commun. ACM Spec. Issue Inf. Filter. 35, 61–70 (1992). https://doi.org/10.1145/138859.138867
Sharma, R., Gopalani, D., Meena, Y.: Collaborative filtering-based recommender system: approaches and research challenges. In: 3rd International Conference on Computational Intelligence & Communication Technology, pp. 1–6. IEEE Press (2017). https://doi.org/10.1109/CIACT.2017.7977363
Sarwar, B., Karypis, G., Konstan, J., Riedl, J.: Item-based collaborative filtering recommendation algorithms.
In: Proceedings of the 10th International Conference on World Wide Web, Hong Kong, 2001, pp. 285–295. ACM, New York (2001). https://doi.org/10.1145/371920.372071
Desrosiers, C., Karypis, G.: A comprehensive survey of neighborhood-based recommendation methods. In: Ricci, F., Rokach, L., Shapira, B., Kantor, P.B. (eds.) Recommender Systems Handbook, pp. 107–144. Springer, Boston, MA (2011). https://doi.org/10.1007/978-0-387-85820-3_4
Bobadilla, J., Ortega, F., Hernando, A., Gutiérrez, A.: Recommender systems survey. Knowl. Based Syst. 46, 109–132 (2013). https://doi.org/10.1016/j.knosys.2013.03.012
Ekstrand, M.D., Riedl, J.T., Konstan, J.A.: Collaborative filtering recommender systems. Found. Trends Hum. Comput. Interact. 4(2), 81–173 (2010). https://doi.org/10.1561/1100000009
Hahsler, M., Vereet, B.: recommenderlab: A Framework for Developing and Testing Recommendation Algorithms (2018). https://CRAN.R-project.org/package=recommenderlab
Ochiai, A.: Zoogeographical studies on the soleoid fishes found in Japan and its neighbouring regions-II. Bull. Japan. Soc. Sci. Fish 22, 526–530 (1957). https://doi.org/10.2331/suisan.22.526
Jacobi, J.A., Benson, E.A., Linden, G.D.: Personalized recommendations of items represented within a database. US Patent US7113917B2 (to Amazon Technologies Inc.) (2006). https://patents.google.com/patent/US7113917B2/en
Breese, J.S., Heckerman, D., Kadie, C.: Empirical analysis of predictive algorithms for collaborative filtering. In: Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence, pp. 43–52.
Morgan Kaufmann Publishers (1998)Google Scholar Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. 1.Joint Research Centre, European CommissionSevilleSpain Cite this paper as: Gómez-Losada Á., Duch-Brown N. (2019) Time Series Forecasting by Recommendation: An Empirical Analysis on Amazon Marketplace. In: Abramowicz W., Corchuelo R. (eds) Business Information Systems. BIS 2019. Lecture Notes in Business Information Processing, vol 353. Springer, Cham Publisher Name Springer, Cham eBook Packages Computer Science Cite paper
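The excerpt names an autocorrelation-based algorithm but does not spell it out. As a minimal sketch only, assuming lags play the role of "users" and the distinct values of the TS play the role of "items" (this mapping and the function names are our illustration, not the authors' implementation), a recommendation matrix could be built as:

```python
import numpy as np

def autocorr(x, lag):
    # Sample autocorrelation of series x at a given lag
    # (assumes x is not constant, so the denominator is nonzero).
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    if lag == 0:
        return 1.0
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

def rating_matrix(ts, max_lag=5):
    # Hypothetical "user-item-rating" matrix from a time series:
    # rows ("users") are lags 1..max_lag, columns ("items") are the
    # distinct values of the series, and each rating scores how strongly
    # occurrences of a distinct value recur at that lag.
    ts = np.asarray(ts, dtype=float)
    items = np.unique(ts)
    m = np.zeros((max_lag, len(items)))
    for lag in range(1, max_lag + 1):
        for j, v in enumerate(items):
            indicator = (ts == v).astype(float)  # occurrences of item v
            m[lag - 1, j] = autocorr(indicator, lag)
    return items, m
```

For a strictly periodic series the highest rating for an item lands on the lag equal to its recurrence period, which is the kind of structure a recommendation step could exploit for forecasting on unevenly observed data.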
IDENTIFICATION OF THYROID HORMONE/THYROID HORMONE RECEPTOR TRαA-MEDIATED TARGET GENES IN PARALICHTHYS OLIVACEUS XIE Yan, FU Yuan-Shuai, SHI Zhi-Yi, JI Wen-Yao, TAO Jia-Kang, Available online To identify target genes mediated by thyroid hormone/thyroid hormone receptor TRαA in Paralichthys olivaceus, the CDS region of the TRαA gene was cloned by RT-PCR and inserted into a recombinant eukaryotic expression vector, p3×Flag-TRαA. The recombinant plasmid was transfected into HEK293T cells, and the results demonstrated that P. olivaceus TRαA was successfully expressed in the mammalian expression system. Lysates of cells transfected with the recombinant plasmid were purified by Flag affinity chromatography to obtain the pure fusion protein TRαA. A dual-luciferase reporter assay was performed in HEK293T cells in which p3×Flag-TRαA was co-transfected with the reporter gene expression vector pGL3-Pro-atoh8-1517/1333/708 containing the candidate target promoter. The results support that TRαA binds to two TRE recognition sequences in the −1497 to −688 promoter region of the atoh8 gene to initiate transcription, and that atoh8 is a target gene directly mediated by thyroid hormone via TRαA. This study provides a basis for further exploration of thyroid hormone-regulated, TRαA-mediated signaling pathways.
ISOLATION AND CHARACTERIZATION OF A RHODOPSEUDOMONAS SP. STRAIN TIAN Ying-Ying, WU Xin-Qiang, JI Yan-Pei, FENG Xiu-Fang, XIAO Bang-Ding Among the known subgroups of anoxygenic phototrophic bacteria, the purple nonsulfur bacteria have been widely investigated for their versatile metabolism. This study isolated an anoxygenic phototrophic bacterium, strain PUF1, from the East Lake, Wuhan, and identified it as Rhodopseudomonas sp. based on colony morphology, cell ultrathin structure, characteristic absorption spectra and phylogenetic analysis. The cells are straight or slightly curved rods, 3.05—10.06 µm long and 0.32—0.68 µm in diameter, with lamellar membranes. The culture appears dark purple-red, with the major pigments being Bchl. a and carotenoids. At an initial pH of 6.0 to 8.0 and a light intensity of 500 to 3000 lux, cell biomass at the stationary phase, measured by optical density (OD), showed no obvious difference, but a liquid-culture pH higher than 8.0 decreased the maximal quantum yield of PS II (Fv/Fm). The protein content of PUF1 varied during growth, reaching a maximum of more than 60% of cellular dry weight in the stationary phase. ATPase activity declined continuously over the culture time. Fv/Fm fitted a single-peak Gaussian model, peaking in the log phase, which indicates a relationship between Fv/Fm and the growth status of the bacterium. These findings provide important references for the study of the physiological and biochemical characteristics of phototrophic bacteria.
EFFECTS OF FLUCTUATING LIGHT ON THE GROWTH OF MICROCYSTIS AERUGINOSA UNDER LOW AVERAGE LIGHT INTENSITY SUN Xin, TANG Jia-Gang, LI Peng-Fei, SUN Jie, HE Fei-Fei, YOU Li, HE Jian-Cheng To study the effects of fluctuating light on algae, the growth of Microcystis aeruginosa, a typical water-bloom alga, was studied under different light conditions using an experimental device with a light intensity control system based on a single-chip microcomputer. Four light conditions were applied: three fluctuating light groups with different fluctuation cycles (10 min FL, 1 h FL and 6 h FL) and an average light group (AL). The experimental results show that the cell densities of Microcystis aeruginosa in the 6 h FL, 1 h FL and 10 min FL groups were 28.3% (P<0.05), 18.2% (P<0.05) and 7.7% (P>0.05) higher, respectively, than that in the AL group. The specific growth rate, Fv/Fm and rETR of Microcystis aeruginosa in the three fluctuating light groups were also significantly higher than those in the AL group (P<0.05), and each index increased significantly with the length of the fluctuating light cycle, whereas the mean NPQ and carotenoid content per unit dry weight showed the opposite relationship to fluctuating light. The results also showed that, at low average light intensity, Microcystis aeruginosa regulates its photosynthetic machinery to utilize light energy better under fluctuating light than under constant light, and the longer the fluctuation period, the better the utilization efficiency. This implies that fluctuating low-intensity light can be used as a means to increase algae production.
COMPARATIVE STUDY OF THE DIFFERENCE IN GLUCOSE AND DEXTRIN UTILIZATION IN THE CHINESE PERCH (SINIPERCA CHUATSI) REN Ping, LIANG Xu-Fang, FANG Liu, HE Shan, XIAO Qian-Qian, SHI Deng-Yong In this study, we compared the utilization of different carbohydrates in Chinese perch and further explored the molecular mechanism of carbohydrate utilization in carnivorous fish. Water, plasma, liver and muscle samples were collected at 0, 1, 2, 3, 4, 8, 12 and 24 h after Chinese perch were fed glucose or dextrin at a dose of 1670 mg/kg. Urine sugar, blood glucose, blood triglycerides, blood insulin, liver and muscle glycogen, and the mRNA expression levels of glucose metabolism-related genes were measured. The results showed the following: (1) Within 1—12 h after feeding, the blood glucose level was significantly higher in the glucose group than in the dextrin group, while the insulin levels were not significantly different between the two groups. (2) The triglyceride content at 2—4 h was higher in the dextrin group than in the glucose group, and the liver glycogen content at 1 h was significantly higher in the dextrin group than in the glucose group. Furthermore, the muscle glycogen content at 24 h was significantly higher in the dextrin group than in the glucose group. (3) One hour after feeding, the mRNA expression levels of glucokinase (GK), fatty acid synthetase (FAS), acetyl-CoA carboxylase type I (ACC1) and citrate synthase (CS) were significantly higher in the dextrin group than in the glucose group, while the expression levels of glycogen synthase (GS) and CS mRNA at 8 h were significantly lower in the dextrin group than in the glucose group. These results demonstrated that dextrin is utilized more efficiently than glucose and that dextrin intake can promote the synthesis of glycogen and fat.
DEVELOPMENT OF A SYBR GREEN REAL-TIME PCR ASSAY FOR DETECTION OF MYXOBOLUS HONGHUENSIS AND ITS APPLICATION LUO Dan, ZHAO Yuan-Li, LIU Xin-Hua, ZHANG Jin-Yong Pharyngeal myxosporidiosis caused by Myxobolus honghuensis is one of the most important limiting factors for the culture of the gibel carp, Carassius auratus gibelio (Bloch), in China. Pathogen abundance in the culture system directly determines the consequences of disease outbreaks in cultured aquatic animals. Therefore, a quantitative detection method for monitoring the pathogen throughout the culture cycle would not only serve early diagnosis but also provide a technical basis for assessing disease risk and evaluating the effects of preventative and control measures. Here, a SYBR Green I real-time fluorescent quantitative PCR (QPCR) assay was developed to detect and monitor the abundance of M. honghuensis with a newly designed primer pair, HHF/R, based on the ITS loci. The specificity, sensitivity, repeatability and applicability of the assay were thoroughly evaluated. The results indicated that the method specifically detects M. honghuensis without cross-reactivity with Henneguya doneci, Myxobolus nielii, Myxobolus pronini or Myxobolus wulii, a genetically similar pathogen causing hepatic myxosporidiosis of gibel carp. The lowest detection limit was 3.02×10¹ copies, 1000 times more sensitive than conventional PCR. The intra-assay and inter-assay coefficients of variation were below 2%. Importantly, the method detected all life cycle stages of M. honghuensis, including the trophozoite and presporogonic stages, not only in the tissues of infected fish but also in the water column and sediments. Therefore, the developed QPCR assay has high specificity, sensitivity and repeatability and can be applied for quantitative monitoring of all life cycle stages of M.
honghuensis distributed in a culture system during the whole culture cycle of gibel carp, and will serve as a basis for developing targeted and precise control strategies for pharyngeal myxosporidiosis of gibel carp. NEWLY RECORDED SPECIES OF BDELLOID ROTIFER IN CHINA AND RESEARCH PROSPECTS LI Ying, WANG Qing, WEI Nan, ZENG Yue, CUI Zong-Bin, YANG Yu-Feng Bdelloid rotifers, belonging to the subclass Bdelloidea (class Eurotatoria; phylum Rotifera), have become an important group attracting scholars in various fields because of their strict parthenogenesis and anhydrobiosis. A newly recorded species of bdelloid rotifer, Otostephanos torquatus Bryce, 1913, was found in moss samples collected from the Wanshan Islands in Zhuhai, Guangdong Province, China, in 2018. A detailed morphological description of this species has been lacking. This paper describes the main taxonomic features of O. torquatus, based on the morphology of the body and scanning electron microscopy images of its trophi. The key diagnostic characters of O. torquatus include the food pellets and trochi, with special rings, triangular upper lips and trophi with 7/7 major teeth. Food pellets are used to store food in the lumen, and the height of the upper lip is lower than that of the trochi. Comparison of the COⅠ gene sequence of O. torquatus with those of related species in the same family and genus shows that O. torquatus falls within the genus Otostephanos. Given the research progress on bdelloid rotifers in other countries and their important position in the study of evolution, Chinese scholars should strengthen the exploration and study of bdelloid rotifers. CHARACTERISTICS OF THE GONADAL TRANSCRIPTOME OF AMUR STURGEON (ACIPENSER SCHRENCKII) UNDER ARTIFICIAL CULTURE LI Ying, RUAN Rui, AI Cheng, YUE Hua-Mei, YE Huan, DU Hao, LI Chuang-Ju The sturgeon, one of the oldest chondrostean fishes in the world, has no sexually dimorphic appearance.
To understand the molecular characteristics of gonad development in sturgeon under artificial culture, mRNA expression in the testis and ovary of two-year-old Amur sturgeon (Acipenser schrenckii Brandt) was analyzed by transcriptome sequencing. A total of 19690 differentially expressed gene transcripts were found between the gonads; the sex-related genes mainly included three transcription factors (Dmrt1, Sox9 and Foxl2) and three transforming growth factors (Amh, Bmp15 and Gdf9). Four KEGG pathways involved in ovarian development were significantly enriched: progesterone-mediated oocyte maturation, oocyte meiosis, ovarian steroidogenesis and the GnRH signaling pathway. The expression patterns of 18 differentially expressed genes in the ovarian steroidogenesis pathway were analyzed. The results suggest that estrogen biosynthesis is inhibited in the ovaries of two-year-old Amur sturgeon, whereas androgen biosynthesis in the testis is not affected. This study provides a foundation for studies on gonadal differentiation and development in sturgeon and on sex identification at the level of mRNA expression. REPRODUCTIVE BIOLOGY OF CARASSIUS AURATUS GIBELIO IN THE IRTYSH RIVER, CHINA LIU Cheng-Jie, ZHANG Zhi-Ming, DING Hui-Ping, XIE Cong-Xin, MA Xu-Fa The reproductive biology of Carassius auratus gibelio was studied with 546 fish collected from the lower reaches of the Irtysh River in the Xinjiang Uygur Autonomous Region from April to October 2013. The overall sex ratio (F/M) was 10.84:1. Gonad development can be divided into six periods according to histological characteristics. The monthly variation in the proportion of gonads at each macroscopic maturity stage and the gonadosomatic index (GSI) indicated that C. auratus gibelio spawns once a year, with a spawning season from May to July and peak spawning in June.
The estimated standard lengths (SL50) and ages (A50) at first sexual maturity were 161 and 135 mm and 2.3 and 1.9 years for females and males, respectively. The absolute fecundity (AF) of C. auratus gibelio was 42,453 eggs per fish, and the relative fecundity (RF) was 98.19 eggs per gram of body weight (BW). Absolute fecundity had a linear relationship with body length and a power-function relationship with weight, but no significant correlation with age or ovary weight. This study enriches the biological data available for C. auratus gibelio and provides a scientific basis for the protection and sustainable utilization of C. auratus gibelio resources in the Irtysh River. STUDY OF THE EFFECT OF CLOVE OIL ON THE SIMULATED TRANSPORT OF LIVE BREAM DING Ya-Tao, YANG Feng, WANG Lin-Lin, SHI Wen-Zheng, WANG Zhi-He The effect of clove oil on the live transportation of bream was studied in this paper. Based on a single-factor experiment, the optimum conditions for preservation during transportation were determined by orthogonal experiments, and the changes in water quality indicators and their effects on fish biochemical indicators during live transportation were measured and analyzed. The results showed that the optimum conditions for preservation and transportation were an anesthetic solution concentration of 15 mg/L, a water temperature of 9°C and a fish-to-water ratio of 1:3. The survival time was extended by up to 50 h, and the fish survival rate was 100%. Among the water quality indexes, there were no significant changes in pH or dissolved oxygen. The ammonia nitrogen level in the anesthesia group increased from 0.049 mg/L to 4.034 mg/L within 48 h, and the number of microorganisms in the water increased significantly within 24 h. The increase in ammonia nitrogen and the growth of microorganisms affected the survival of the fish.
Among the biochemical indexes of the fish body, the glycogen content in the muscles of the anesthesia-group fish decreased significantly within 24 h, the lactic acid content increased significantly and then decreased, and the pH first decreased and then increased, indicating that glycogen consumption produced lactic acid during the transportation of live fish. Among the blood indexes, AST activity, LDH activity, urea and creatinine (CREA) in the anesthesia group increased after 12 h, increased significantly after 36 h (P<0.05), and then decreased. These results indicate that the metabolic levels of liver and kidney tissue were affected as the survival time was prolonged. TISSUE DISTRIBUTION OF TWO SUPPRESSOR OF CYTOKINE SIGNALING 3 (SOCS3) GENES AND THEIR ROLE IN BACTERIA-INDUCED INNATE IMMUNE RESPONSE IN GRASS CARP (CTENOPHARYNGODON IDELLA) ZHAO Shan-Shan, SUN Yuan, ZHENG Guo-Dong, CHEN Jie, ZOU Shu-Ming The suppressor of cytokine signaling 3 (SOCS3) regulates the immune response. Here, we cloned the socs3b gene and investigated the tissue distribution and roles of the socs3 genes in grass carp (Ctenopharyngodon idella). The grass carp socs3b gene is 2126 bp in length, encoding a 216 aa peptide. In adult fish, both orthologs were expressed in 11 tissues, with slight differences among tissues. After Aeromonas hydrophila injection, both socs3a and socs3b were significantly up-regulated in liver, spleen, intestine and kidney tissues. These results suggest that the socs3 genes may play important roles in tissue growth of grass carp and in modulating the bacteria-induced innate immune response. This study provides a reference for follow-up studies on the function of the socs3 genes in grass carp.
WHOLE GENOME SEQUENCING AND COMPARATIVE ANALYSIS OF BACILLUS PARALICHENIFORMIS STRAIN FA6 ZHAO Di, WU Shan-Gong, FENG Wen-Wen Previous studies have shown that Bacillus paralicheniformis strain FA6 (FA6), isolated from the intestine of grass carp, plays a role in the degradation of various carbohydrates (e.g., amylase and cellulase activities). To study its putative mechanisms of action and explore its potential as a probiotic, we used third-generation sequencing technology to determine its whole genome sequence. Genome assembly, gene prediction and functional annotation were performed using bioinformatics methods. In addition, we conducted structural and functional comparative genomic analyses between B. paralicheniformis FA6 and four other available Bacillus spp. genomes (two B. paralicheniformis and two B. licheniformis). Sequence analysis showed that the genome of B. paralicheniformis FA6 consists of a single chromosome of 4,450,579 base pairs with a GC content of 45.9%. The genome contains multiple food digestion-related genes, including 128 protease genes, 32 lipase genes and 72 glycoside hydrolase genes, as well as seven genes encoding lantibiotics. Structural comparative analysis revealed that all five Bacillus spp. genomes share a collinear structural relationship, but the genomic features of B. paralicheniformis FA6 are most similar to those of the two conspecific strains. A comparison of metabolic (KEGG) pathways among the five Bacillus spp. strains showed that B. paralicheniformis FA6 has the largest number of genes involved in metabolism and environmental information processing. The B. paralicheniformis FA6 genome encodes 5 cellulases, 7 hemicellulases and 5 amylases, more than the other four strains, which indicates that B.
paralicheniformis FA6 is better adapted for the digestion of plant cell wall polysaccharides. The results of this study indicate that B. paralicheniformis FA6 is highly adapted to the utilization of a broad range of plant metabolites, which may reflect its adaptive evolution in the intestinal tract of grass carp. Moreover, the results provide a theoretical basis for the application of B. paralicheniformis FA6 as a dietary supplement in aquaculture feed. IDENTIFICATION OF GRASS CARP (CTENOPHARYNGODON IDELLUS) DENDRITIC CELLS AND REGULATORY EFFECTS OF BACILLUS SUBTILIS ON THEIR IMMUNE FUNCTIONS LI Si-Si, ZHOU Cheng-Chong, WANG Huan, XU Li-Li, ZHANG Shi-Yu, XIE Meng-Qi, CHEN Xiao-Xuan, WU Zhi-Xin To investigate the biological characteristics of grass carp (Ctenopharyngodon idellus) dendritic cells and the regulatory effects of Bacillus subtilis on them, we isolated grass carp dendritic cells and analyzed their morphological features, characteristic biological functions and surface molecular markers. The expression of immune-related cytokines in dendritic cells in response to Bacillus subtilis stimulation was analyzed by RT-PCR. Our results showed that the DCs have the classical dendritic morphology, the ability to initiate T cell proliferation, and migration ability. LPS stimulation promoted the maturation of the dendritic cells by increasing the expression of the surface molecular markers CD83 and CD80/86, suggesting that grass carp DCs are morphologically and functionally homologous to mammalian dendritic cells. The expression of the anti-inflammatory factors Il-4 and Il-10 increased significantly after stimulation with UV-killed Bacillus subtilis (P<0.05) and reached the highest level at 12 h, revealing that Bacillus subtilis can promote the expression of anti-inflammatory factors to regulate the immune functions of dendritic cells.
In conclusion, these results explore the biological characteristics of teleost dendritic cells and characterize the influence of the probiotic Bacillus subtilis on their immune functions. GENETIC DIVERGENCE AND POPULATION DIFFERENTIATION ANALYSIS OF OPSARIICHTHYS BIDENS FROM THE YILUO RIVER DENG Yan, WANG Xue, LIN Peng-Cheng, LIU Huan-Zhang, WANG Xu-Zhen Deep genetic divergence often indicates potentially distinct species, although exceptions are occasionally found. In this study, samples of Opsariichthys bidens from a tributary of the Yellow River (the Yiluo River) were analyzed using the mitochondrial cytochrome b (Cyt b) gene and simple sequence repeats (SSR), and stable isotope analysis was used to explore their species status. Phylogenetic trees were reconstructed with the neighbor-joining (NJ) and Bayesian inference (BI) methods based on the Cyt b gene sequences, and two well-supported clades were recovered. No shared haplotypes were detected between the two clades, and the average genetic divergence of 3.1% between them seemed to reach the DNA-barcoding criterion for species identification. Nevertheless, SSR analyses revealed no significant genetic differentiation between the clades (Fst=0.0012, P=1), with 99.88% of the variation occurring among individuals. In addition, the δ13C and δ15N isotope compositions demonstrated that the two clades share a common diet, with no trophic niche separation between them. The SSR and stable isotope analyses indicate that the two mitochondrially well-separated clades of O. bidens cannot be explained as cryptic species; the deep genetic divergence in the Cyt b gene was likely inherited from their ancestral populations.
CLONING AND EXPRESSION ANALYSIS OF THE CATHEPSIN B GENE OF GOLDEN POMPANO (TRACHINOTUS OVATUS) ZHU Peng, HU Shu, QIAO Rui-Feng, LIAO Yong-Yan, WANG Shu-Yi, PENG Jin-Xia, LU Zhuan-Ling, WEI You-Chuan To investigate the expression and function of the Trachinotus ovatus cathepsin B (TroCatB) gene in response to bacterial stimulation, the TroCatB cDNA was cloned by RT-PCR and RACE. It is 2181 bp in length, with a 391 bp 5' UTR, a 797 bp 3' UTR and a 993 bp ORF encoding 330 amino acid residues. The estimated molecular mass and theoretical isoelectric point are 36.37 kD and 5.73, respectively. The TroCatB protein has a signal peptide (1Met-18Ala), a propeptide (25Leu-64Gly) and a typical papain-family cysteine domain containing the catalytic sites 107Cys, 277His and 297Asn. Homology analysis showed that the TroCatB protein shares 67.0—90.9% identity with its homologs in other vertebrates, and that the mature peptide region shares 73.7—92.4% identity. In the N-J phylogenetic tree, TroCatB clustered together with the cathepsin B of other fishes. Real-time quantitative PCR indicated that TroCatB mRNA is expressed in various tissues, with the highest level in the spleen. Vibrio alginolyticus infection significantly induced the expression of the TroCatB gene in the spleen, peaking at 6 h, and in head kidney tissue (P<0.05), peaking at 12 h. These results indicate that the domain and catalytic sites of the TroCatB protein are evolutionarily conserved and that TroCatB plays an important role in the innate immune defense of T. ovatus against bacteria; they lay a foundation for elucidating the function of TroCatB in immune processes and pathogenesis.
ESTABLISHMENT OF A KIDNEY CELL LINE FROM THE EUROPEAN EEL (ANGUILLA ANGUILLA) AND ITS SUSCEPTIBILITY TO ANGUILLID HERPESVIRUS LI Miao-Miao, WU Bin, LIN Nan, WANG Xiao-Wei, JIANG Xiao-Bin, LIN Guo-Qin, FAN Han-Ping To support investigation of the epidemiology and etiology of eel viral diseases, and the isolation, cultivation and identification of eel viruses, a new cell line (EEK) was established by the tissue explant method from the European eel (Anguilla anguilla). EEK cells are fibroblast-like and were maintained and subcultured for 38 passages over a 12-month period. The medium, serum concentration and temperature were optimized. EEK cells grow and proliferate normally in DMEM/F12 and L15 but cannot proliferate in MEM. The growth rate increased with increasing fetal bovine serum (FBS) in the range of 5%—15% and decreased with more than 20% or less than 5% FBS. The cells grew well at 22—27℃, but the growth rate dropped below 17℃ or above 32℃. A virus sensitivity test revealed that the cell line is susceptible to infection with Anguillid herpesvirus (AnHV), showing an obvious cytopathic effect. The establishment of a kidney cell line from the European eel enriches the variety of fish cell lines, providing important experimental material for the diagnosis of eel viral diseases, the study of viral etiology and the development of virus vaccines. EFFECTS OF DIETARY PROTEIN LEVELS ON GROWTH PERFORMANCE, BODY COMPOSITION AND SERUM BIOCHEMICAL INDICES OF JUVENILE SINILABEO DECORUS TUNGTING (NICHOLS) CHENG Xiao-Fei, LI Chuan-Wu, ZOU Li, JIANG Guo-Min, ZENG Guo-Qing, LI Jin-Long, LIU Ming-Qiu, LIANG Zhi-Qiang, XU Yuan-Qin This study evaluated the effects of dietary protein on the condition factors, body composition and serum biochemical indices of juvenile Sinilabeo decorus Tungting (Nichols). Five isolipidic and isoenergetic diets were formulated containing 32.57%, 37.58%, 42.76%, 47.83% and 52.22% protein, respectively.
A total of 450 juvenile S. decorus Tungting with an average initial weight of (14.10 ± 1.08) g were randomly distributed into five groups, with three replicates per group and 30 fish per replicate, and fed for 60 days. The results indicated that the survival ratio (SR) and feed conversion ratio (FCR) showed no significant differences among groups (P>0.05). Protein efficiency declined with increasing dietary protein level (P<0.05). The weight gain rate (WGR) and specific growth rate (SGR) of the 52.22% group were significantly lower than those of the 32.57% and 42.76% groups (P<0.05). The condition factor (CF) and intestine length ratio (ILR) decreased with increasing dietary protein level, with the 52.22% group significantly lower than the 32.57% group (P<0.05). There was no significant difference in body chemical composition except for muscle crude protein and ash; muscle crude protein increased with dietary protein level. Seventeen amino acids were detected in fish muscle. The contents of Met, Ile, Leu, Asp, Ser, Glu, Ala, Arg, ΣEAA, ΣDAA and ΣTAA in the dorsal muscle of the 32.57% group were significantly lower than those in the 52.22% group. With increasing dietary protein level, serum triglyceride (TG) and very low-density lipoprotein cholesterol (VLDL-c) increased, except in the 37.58% group. The optimum dietary protein level for juvenile S. decorus Tungting is estimated to be 37.58%—42.76% under the current experimental conditions. Broken-line analysis of SGR estimated the protein requirement for maximum growth to be about 42.91% for juvenile S. decorus Tungting.
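The broken-line estimate quoted above comes from fitting a two-segment model in which the response rises linearly with dietary protein and then plateaus at a breakpoint. As a hedged, generic illustration only (the authors' actual fitting procedure and software are not given here, and the function name is ours), such a fit can be sketched as:

```python
import numpy as np

def broken_line_fit(x, y, n_grid=201):
    # Fit y = a + b * min(x, x0): a line that rises and then plateaus
    # at the breakpoint x0. The x0 minimising the residual sum of
    # squares is the broken-line estimate of the requirement.
    x, y = np.asarray(x, float), np.asarray(y, float)
    best = None
    for x0 in np.linspace(x.min(), x.max(), n_grid):
        z = np.minimum(x, x0)                      # plateau beyond x0
        A = np.column_stack([np.ones_like(z), z])  # intercept + slope
        a, b = np.linalg.lstsq(A, y, rcond=None)[0]
        rss = float(np.sum((a + b * z - y) ** 2))
        if best is None or rss < best[0]:
            best = (rss, float(x0), float(a), float(b))
    return best[1], best[2], best[3]               # x0, a, b
```

With dietary protein levels on the x axis and SGR on the y axis, the returned breakpoint x0 would correspond to the estimated protein requirement for maximum growth.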
REDESCRIPTION OF MYXOBOLUS WULII AND COMPARISON OF ITS STRAINS IN DIFFERENT SECTIONS OF THE YANGTZE RIVER BASIN YANG Cheng-Zhong, ZHANG Diao-Diao, ZHAO Yuan-Jun, Available online, doi: 10.7541/2019.128 This study redescribed Myxobolus wulii (Wu & Li) Landsberg & Lom, 1991 and compared its strains from different sections of the Yangtze River Basin (Chongqing, Hubei and Jiangsu strains) based on morphological and molecular (18S rDNA) data. The results showed that the spores and polar capsules of the Chongqing strain were slightly smaller than those of the Hubei strain, and that the polar capsules of the Chongqing strain were equal in size, whereas those of the Hubei strain were unequal. The similarities and genetic distances among the three strains of M. wulii were 99.2%—99.9% and 0.002—0.007, respectively. Phylogenetic analysis showed that the sequences of M. wulii exhibited weak geographical and host-origin structuring but strong structuring by parasitic site. The sequences clustered into two lineages, a gill-parasite clade and a hepatopancreas-parasite clade, indicating that individuals of M. wulii parasitizing the same host site are more closely related. Moreover, the gill-parasite clade diverged earlier than the hepatopancreas-parasite clade, which might be related to the evolution from external to internal parasitism. These data suggest that the gill-parasitizing population might represent the earliest migrants of M. wulii.
SPATIAL DISTRIBUTION PATTERN OF SARGASSUM VACHELLIANUM IN THE COASTAL WATERS OF TYPICAL NORTHERN ZHEJIANG ISLANDS ZHANG Ya-Zhou, BI Yuan-Xin, WANG Wei-Ding, SUI You-Zhen, LU Kan-Er, FENG Mei-Ping, LIANG Jun, ZHOU Shan-Shan To clarify the spatial distribution patterns of Sargassum vachellianum on typical nearshore reefs of northern Zhejiang, investigations were conducted on three reef belts comprising 12 reefs from the end of May to the beginning of June 2016. Researchers carried out vessel-mounted observations and collected samples by SCUBA (self-contained underwater breathing apparatus) diving to compare the horizontal and vertical distribution features of adult-stage S. vachellianum. The results showed that (1) at the area scale, water environments with high turbidity and high wave energy inhibited the growth and spread of S. vachellianum, which could be found only on the second, narrow reef belt. Given a minimum suitable water temperature of 10℃ for growth, it can be inferred that the northernmost island of the Zhoushan Archipelago is the northern distribution limit of S. vachellianum, which is endemic to China. (2) At the site scale, wave exposure might explain why S. vachellianum was sparse on the southeastern and abundant on the northwestern sides of reefs. The average height of S. vachellianum was only 26.3 cm on the four reefs of the second reef belt, which indicates that high wave energy is not suitable for its growth. (3) Within the site scale, North Yushan Island on the second reef belt had the lowest turbidity, so S. vachellianum was widespread there and could be found even at a depth of 6.4 m. In contrast, coastal waters with high turbidity restricted the vertical distribution of S. vachellianum. The results also showed that the average height of S. vachellianum decreases with depth, from which it can be inferred that S.
vachellianum were intolerance of intense light, but light is a major factor for its distribution. Compared with the vertical distribution of Sargassum horneri in this region, S. vachellianum were adaptive to the environment with high turbidity and high sediment. Therefore, the change of inhibited water environment has driven S. vachellianum to the edge of extinction. By studying the causes of the spatial distribution of S. vachellianum, the decline of algae fields and the trend of variation, the conclusion can be drawn that S. vachellianum are suitable for artificial transplantation in Northern Zhejiang coastal area. The findings can provide meaningful references for the protection and restoration of algae fields. EFFECTS OF ENZYMATIC HYDROLYZED SOYBEAN MEAL ON GROWTH PERFORMANCE, LIVER FUNCTIONAND METABOLISM OF LARGEMOUTH BASS (MICROPTERUS SALMOIDES) ZHANG Gai-Gai, LI Xiang, CAI Xiu-Bing, Zhang Sheng-Xin, HUA Xue-Ming, HUANG Zhong-Yuan, LI Ning-Yu, YAO Jing-Ting [Abstract](979) [PDF 930KB](1) To investigate the effects of enzymatic hydrolyzed soybean meal(ESBM)on growth performance, liver function and metabolism of largemouth bass (Micropterus salmoides), five isonitrogenous and isoenergetic diets were formulated by replacing fish meal with a plant-based protein source compound (enzymatic hydrolyzed soybean meal: corn gluten meal=10 鲶1) for a 67-days trial. The additions of ESBM in the feed were 0 (E0), 15% (E15), 20% (E20), 25% (E25), 30% (E30) to replace 0, 23.64%, 30.91%, 40%, 47.27% of the fish meal, respectively. In additional, two diets of soybean meal (SBM) and fermented soybean meal (FSBM) were used as the substitute for 20% ESBM, respectively. The results showed that the specific growth rate and weight gain rate of E25 and E30 were significantly higher than those of other groups (P<0.05), and that the feed coefficient ratio of each group had no significant difference (P>0.05). The survival rates of the E25 and E30 groups were lower than other groups. 
With the increase of ESBM, the viscerosomatic index, hepatic index and body lipid content decreased significantly (P<0.05). The specific growth rate of the FSBM group was significantly lower than that of the E20 group (P<0.05), and the viscerosomatic index and hepatic index of FSBM were significantly higher than those of SBM and E20 (P<0.05). The intestinal activities of amylase and lipase increased first and then decreased with increasing ESBM, and were significantly higher than those of the control group (P<0.05). The activity of pepsin in the E20 and E30 groups was significantly higher than that of the control group (P<0.05). The activity of amylase in the FSBM group was significantly higher than in SBM and E20 (P<0.05), while intestinal lipase activity showed the opposite change. The activities of liver total antioxidant capacity (T-AOC), aspartate aminotransferase (AST) and alanine aminotransferase (ALT) in all groups, except liver ALT in the E30 group, were significantly induced by ESBM (P<0.05), and ESBM significantly reduced liver malondialdehyde (MDA) (P<0.05). The liver MDA content and ALT activity of the FSBM group were significantly lower than those of the SBM and E20 groups (P<0.05), and liver AST activity was in the order E20>FSBM>SBM. Oxygen consumption increased significantly with increasing ESBM, and nitrogen excretion in E20, E25 and E30 was significantly higher than in the E0 group (P<0.05). The muscle nitrogen retention rates of E20 and E30 were higher than those of the other groups. ESBM had significant effects on serum non-esterified fatty acid (NEFA), total cholesterol (T-CHO), triglyceride (TG) and low-density lipoprotein cholesterol (LDL-C) of largemouth bass. Among the E20, FSBM and SBM groups, oxygen consumption in the FSBM group was significantly lower than in the other two groups (P<0.05), while nitrogen excretion showed the opposite. The serum T-CHO of the SBM group was significantly higher than that of the other two groups (P<0.05), while serum LDL-C and muscle lipid showed the opposite trend.
The serum TG of the E20 group was significantly lower than that of the other two groups (P<0.05). These results indicate that the addition of ESBM up to 30% did not impair growth and reduced liver oxidative stress, enhancing nutrient metabolism. FSBM, SBM and ESBM can all replace 30.91% of the fish meal, with ESBM giving the best results. IDENTIFICATION OF MITE TRANSPOSONS IN 33 FISH GENOMES 2020, 44(1): 1—9 doi: 10.7541/2020.001 Miniature inverted-repeat transposable elements (MITEs), a group of short, non-autonomous DNA transposons, are widely present in eukaryotic genomes. The genomic locations of MITE insertions can affect the host. In this study, MITEs in 33 fish genomes of the Agnatha, Chondrichthyes, Sarcopterygii and Actinopterygii were predicted and analyzed using a bioinformatics approach. Ultimately, 2433 MITEs were identified in the 33 fish genomes. MITE content in the 33 fish genomes varied from 0.11% to 21.18%, and was positively correlated with fish genome size. The MITEs were classified into 10 superfamilies according to their terminal inverted repeats (TIRs) and target site duplications (TSDs), with Tc1-Mariner being the largest superfamily. The insertion of MITEs into fish genomes occurred mainly from 4 million years ago to the present, and most species underwent an explosive expansion between 2—0.5 million years ago. A number of fish MITEs were inserted into or near genes, which may play an important role in the regulation of gene expression. HU Jing-Wen, SHAO Feng, ZHAO Lian-Peng, HAN Ming-Jin and PENG Zuo-Gang. IDENTIFICATION OF MITE TRANSPOSONS IN 33 FISH GENOMES[J]. ACTA HYDROBIOLOGICA SINICA. doi: 10.7541/2020.001.
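The positive correlation between MITE content and genome size reported above is the kind of result a Pearson correlation test captures. A hedged sketch with illustrative numbers (not the 33-genome dataset):

```python
# Sketch of a Pearson correlation between MITE content (%) and genome size.
# All values below are hypothetical, chosen only to show a positive trend.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson's r: covariance divided by the product of standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

genome_size_mb = [400, 800, 1000, 1600, 2400]   # hypothetical genome sizes
mite_percent   = [0.2, 1.5, 3.0, 8.0, 15.0]     # hypothetical MITE content

r = pearson_r(genome_size_mb, mite_percent)      # strongly positive r
```

In practice this would be run with `scipy.stats.pearsonr`, which also returns a p-value for the correlation.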
ROLE OF ENDOPLASMIC RETICULUM STRESS IRE1 PATHWAY IN HEPATOCYTE APOPTOSIS OF GRASS CARP CTENOPHARYNGODON IDELLA INDUCED BY SODIUM NITRITE CHEN Si-Qi, XIE Li-Xia, YAO Chao-Rui, LI Da-Peng, TANG Rong 2020, 44(1): 10—19 doi: 10.7541/2020.002 Nitrite, a common pollutant in aquaculture, is an intermediate product of the nitrogen cycle in ecosystems. To explore the mechanisms of sodium nitrite-induced cell apoptosis, grass carp liver cells (L8824) were exposed to four concentrations of sodium nitrite (0, 5, 20 and 50 mg/L) with or without treatment with the IP3 receptor antagonist 2-APB or the IRE1 inhibitor STF-083010. The expression of the apoptosis-related genes jnk, bcl-2, bax, caspase9, caspase3, ire1α, xbp1s and grp78 and the cytoplasmic calcium ion concentration were assessed. The results showed that nitrite significantly increased the apoptosis rate, the cytoplasmic calcium ion concentration and the mRNA levels of jnk, bax, caspase9, caspase3, ire1α, xbp1s and grp78, and significantly decreased the bcl-2 mRNA level; these effects were reversed by the STF-083010 treatment. Besides, both 2-APB and STF-083010 reduced the sodium nitrite-induced rise in cytoplasmic calcium. These results indicate that the endoplasmic reticulum stress-related IRE1 pathway plays a pivotal role in nitrite-mediated L8824 cell apoptosis and calcium dyshomeostasis. CHEN Si-Qi, XIE Li-Xia, YAO Chao-Rui, LI Da-Peng and TANG Rong. ROLE OF ENDOPLASMIC RETICULUM STRESS IRE1 PATHWAY IN HEPATOCYTE APOPTOSIS OF GRASS CARP C. IDELLA INDUCED BY SODIUM NITRITE[J]. ACTA HYDROBIOLOGICA SINICA. doi: 10.7541/2020.002. MOLECULAR CLONING, EUKARYOTIC EXPRESSION AND FUNCTION STUDY OF FTR56 FROM DANIO RERIO KUANG Ming, LIU Wan-Meng, YAO Jian, HUO Shi-Tian, LIU Xue-Qin To understand the role of zebrafish finTRIM in antiviral innate immunity, the zebrafish ftr56 gene was cloned and analyzed for its effect on the proliferation of spring viremia of carp virus (SVCV).
Primers were designed according to the zebrafish FTR56 sequence. The FTR56 CDS region was amplified by PCR and ligated into the eukaryotic expression vector pcDNA4.0-His to construct the eukaryotic expression plasmid pcDNA4.0-FTR56-His, and bioinformatics analysis was conducted. Real-time quantitative PCR (qRT-PCR) was used to detect the expression of FTR56 mRNA in SVCV-infected zebrafish embryo fibroblasts (ZF4). Phylogenetic tree analysis showed that zebrafish FTR56 clustered individually. Amino acid sequence alignment showed that the similarities with TRIM56 of chimpanzees, cattle and mice were 22%—23%. The FTR56 secondary structure has one RING finger domain, one B-box domain, one coiled-coil region and one B30.2 domain. The FTR56 mRNA level increased significantly at 24h after SVCV infection. After overexpression of FTR56, the mRNA and protein levels of the SVCV G protein were significantly reduced at 12h and 24h compared with the control group, together with significantly decreased SVCV titers in the culture supernatant, indicating that FTR56 inhibits SVCV proliferation. This study provides a reference for further revealing the immunoregulatory mechanism of finTRIM in fish viral diseases. KUANG Ming, LIU Wan-Meng, YAO Jian, HUO Shi-Tian and LIU Xue-Qin. MOLECULAR CLONING, EUKARYOTIC EXPRESSION AND FUNCTION STUDY OF FTR56 FROM DANIO RERIO[J]. ACTA HYDROBIOLOGICA SINICA. doi: 10.7541/2020.003. AN EFFECTIVE METHOD FOR CREATING ALLO-OCTOPLOIDS BY INTEGRATING EXOGENOUS SPERM GENOME INTO GIBEL CARP EGGS LI Zhi, LU Meng, ZHOU Li, WANG Zhong-Wei, LI Xi-Yin, WANG Yang, ZHANG Xiao-Juan, GUI Jian-Fang Gibel carp (Carassius auratus gibelio Bloch) possesses a special ability to integrate an exogenous sperm genome or chromosome fragments into its eggs for co-development, but the spontaneous formation probability is very low in allogynogenetic offspring. In this study, white crucian carp (C.
cuvieri) sperm were treated with 0.25%, 0.5%, 1%, 2% and 4% trypsin solution for 10min, or with 1% trypsin solution for 5, 10, 15, 20 and 25min, respectively, and then used to fertilize mature eggs of allogynogenetic gibel carp clone A+. By comparing the changes in sperm structure and motility between the control group and the different treatment groups, and considering reproductive indexes such as fertilization rate, hatching rate and survival rate, as well as the occurrence rate of allo-octoploids, an effective method was established to integrate the exogenous sperm genome into allogynogenetic gibel carp eggs to create allo-octoploids. The average survival rate and octoploid rate were (2.4±0.7)% and (16.3±0.5)%, respectively, when the eggs of gibel carp clone A+ were fertilized with white crucian carp sperm treated with 1% trypsin solution for 15min. The effective method developed in this study will be a valuable way to create novel genetic resources with excellent economic traits in gibel carp genetic breeding. Subsequently, 57 allo-octoploid adults were obtained by batch processing and flow cytometry screening from 6-month-old offspring, which can be used as a core population for breeding novel gibel carp varieties with faster growth and/or higher disease resistance. LI Zhi, LU Meng, ZHOU Li, WANG Zhong-Wei, LI Xi-Yin, WANG Yang, ZHANG Xiao-Juan and GUI Jian-Fang. AN EFFECTIVE METHOD FOR CREATING ALLO-OCTOPLOIDS BY INTEGRATING EXOGENOUS SPERM GENOME INTO GIBEL CARP EGGS[J]. ACTA HYDROBIOLOGICA SINICA. doi: 10.7541/2020.004. MORPHOLOGY AND COⅠ GENES OF DIFFERENT SOURCES OF HUCHO TAIMEN IN THE IRTYSH RIVER BASIN GUO Ai-Min, Kadirdin Alken, JIAO Li, HAO Cui-Lan, CHEN Lin, PAN Guo-Qiang, XIE Zhi-Sheng, ZHANG Wen-Run, RONG Meng-Jie, YUE Cheng Hucho taimen is an endangered species. It is distributed in the Irtysh River basin and the Heilongjiang River basin, but the two basins have long been geographically isolated.
Differences in the morphology and anatomy of Hucho between the two basins have been reported, yet the differences in gene sequences remain unclear. Several Hucho were collected from natural waters (Hucho BHB and Hucho BEJ) or introduced from Heilongjiang (Hucho HLJ). Their morphological characteristics were described. The COⅠ gene was amplified and sequenced, and a phylogenetic tree was constructed together with COⅠ genes downloaded from GenBank. The results indicated that Hucho BEJ and Hucho HLJ had the same morphological characteristics, and that Hucho BHB differed from the other two in body color and spot size. The phylogenetic tree based on the COⅠ gene showed that the three different Hucho and H. taimen clustered into one large branch, that Hucho BEJ, Hucho HLJ and Hucho from Russia's Amur River (called the Heilongjiang River in China) formed a common group, and that Hucho BHB formed a separate group. Based on the genetic distance matrix of the COⅠ gene, Hucho BEJ and Hucho HLJ had small genetic distances to Hucho of the Russian Amur River (0—0.0044), while Hucho BHB was far from Hucho BEJ, Hucho HLJ and Hucho of the Russian Amur River (0.0057—0.0082). These results indicated that Hucho BHB is quite different in morphology and gene sequence from Hucho HLJ and Hucho of the Amur River, and may belong to a different ecological and geographical type. The almost identical morphology and gene sequences of Hucho BEJ and Hucho HLJ suggest that Hucho BEJ derived from fish released into the river or escaped from farms. GUO Ai-Min, Kadirdin Alken, JIAO Li, HAO Cui-Lan, CHEN Lin, PAN Guo-Qiang, XIE Zhi-Sheng, ZHANG Wen-Run, RONG Meng-Jie and YUE Cheng. MORPHOLOGY AND COⅠ GENES OF DIFFERENT SOURCES OF HUCHO TAIMEN IN THE IRTYSH RIVER BASIN[J]. ACTA HYDROBIOLOGICA SINICA. doi: 10.7541/2020.005.
THE EXPRESSION OF MYOSTATIN PROPEPTIDE IN BACILLUS SUBTILIS AND ITS EFFECT ON THE GROWTH OF MACROBRACHIUM NIPPONENSE ZHANG Xiao-Dong, SHEN Wen-Ying, REN Gang Myostatin propeptide (MSTNpp) of Macrobrachium nipponense was expressed in Bacillus subtilis, and its effect on growth and creatine kinase activity was investigated. The MSTNpp gene sequence was optimized to synthesize BsMSTNpp according to the B. subtilis codon preference, and the recombinant expression plasmid pGJ105-BsMSTNpp was obtained. After transformation and fermentation of the recombinant B. subtilis, the supernatant was collected and identified by Western blot. The results showed that the molecular weight of the recombinant protein was 36.0 kD. The level of recombinant BsMSTNpp increased over time, and the expression level at 100h was 10 times that at 24h. To verify the biological activity of the recombinant BsMSTNpp, healthy freshwater shrimp with an average body weight of 1.52 g and an average body length of 4.55 cm were randomly divided into 4 groups with 3 replicates per group and 200 shrimps per replicate for a 30d trial. Experimental group 1 and experimental group 2 were fed the basal diet supplemented with recombinant pGJ105-BsMSTNpp at doses of 0.5×10⁶ CFU/g and 1.0×10⁶ CFU/g, respectively. Control group 1 was fed the basal diet, and control group 2 was fed the basal diet with pGJ105 at a dose of 1.0×10⁶ CFU/g. The results showed that the growth rates of the experimental groups were significantly higher than that of control group 1 (P<0.05), and there was no significant difference between the experimental groups and control group 2. The creatine kinase activities of the experimental groups were higher than those of the control groups (P<0.05). The results revealed that recombinant BsMSTNpp could enhance creatine kinase activity, promoting myocyte proliferation and differentiation and thereby increasing the muscle growth rate of Macrobrachium nipponense.
The results provide technical support for studying the function of MSTNpp and its application in shrimp aquaculture. ZHANG Xiao-Dong, SHEN Wen-Ying and REN Gang. THE EXPRESSION OF MYOSTATIN PROPEPTIDE IN BACILLUS SUBTILIS AND ITS EFFECT ON THE GROWTH OF MACROBRACHIUM NIPPONENSE[J]. ACTA HYDROBIOLOGICA SINICA. doi: 10.7541/2020.006. THE EDNA COLLECTION METHOD OF ZHOUSHAN COASTAL WATERS CHEN Zhi, SONG Na, MINAMOTO Toshifumi, WU Qian-Qian, GAO Tian-Xiang This study used Sepiella japonica as the research object to establish and optimize a method for acquiring environmental DNA (eDNA) from the high-turbidity waters of the Zhoushan offshore area using absolute quantification. The results indicated that the eDNA yield of the ethanol precipitation method was 1.76—2.53 times that of the filtration method, but the limitations of collection volume, treatment requirements and supporting equipment make the ethanol precipitation method difficult to employ in practice. Filter screens with small apertures had no filtering effect on sediment. The size of the filter aperture had a great effect on eDNA yield only when small-volume water samples were collected. Precipitation treatment of water samples enhanced the yield of eDNA, but also increased the variation in eDNA yield. Cationic surfactant significantly inhibited eDNA degradation. The membrane-removal method performed better than the membrane method, and it is recommended to increase the centrifugation time when the membrane-removal method is used. Although the phenol extraction method cannot improve the eDNA yield, it can significantly improve the purity of the product. This study is the first to establish an optimal method for obtaining eDNA of macro-organisms from Zhoushan offshore waters, which provides a reference for water sample collection and eDNA extraction in similar waters. CHEN Zhi, SONG Na, MINAMOTO Toshifumi, WU Qian-Qian and GAO Tian-Xiang. THE EDNA COLLECTION METHOD OF ZHOUSHAN COASTAL WATERS[J].
ACTA HYDROBIOLOGICA SINICA. doi: 10.7541/2020.007. ISOLATION, IDENTIFICATION AND DETOXIFICATION OF A TRICHLORPHON-TOLERANT RHODOBACTER SPHAEROIDES XR12 CAO Hai-Peng, ZHANG Shu-Meng, YU Jing-Jing, AN Jian To explore microbial resources for controlling trichlorphon pollution, a potential trichlorphon-tolerant bacterium, XR12, was isolated and screened from aquaculture sediment, and its antibiotic resistance, safety and detoxification effect were evaluated. The results indicated that strain XR12 exhibited a maximum tolerated trichlorphon concentration of 7680 mg/L. Strain XR12 was identified as Rhodobacter sphaeroides through phenotypic characterization based on physiological-biochemical characteristics and phylogenetic analysis based on the 16S rRNA gene sequence. Its 16S rRNA sequence had 98%—100% homology with strains of R. sphaeroides from GenBank, and showed the closest relation to R. sphaeroides strain RSF1 (GenBank accession number: KF606891). In addition, XR12 exhibited high sensitivity to kanamycin, roxithromycin, pipram, amoxicillin, florfenicol, polymyxin B, neomycin, gentamycin, ofloxacin, enrofloxacin, norfloxacin, streptomycin, tetracycline and netilmicin, intermediate sensitivity to doxycycline, and resistance to bacitracin, nalidixic acid and sulfamethoxazole. XR12 had an LC50 of >10⁹ CFU/mL for zebrafish, and could significantly raise the LC50 of trichlorfon to zebrafish from 26.06 to 59.51 mg/L, indicating a good detoxification effect on trichlorfon. This study indicated that XR12 has potential for trichlorfon detoxification in aquaculture water. CAO Hai-Peng, ZHANG Shu-Meng, YU Jing-Jing and AN Jian. ISOLATION, IDENTIFICATION AND DETOXIFICATION OF A TRICHLORPHON-TOLERANT RHODOBACTER SPHAEROIDES XR12[J]. ACTA HYDROBIOLOGICA SINICA. doi: 10.7541/2020.008.
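LC50 values like the 26.06 and 59.51 mg/L above are usually fitted with a probit model; a hedged sketch of the simpler log-linear interpolation that approximates it is shown below. All mortality data are hypothetical, not from the XR12 study:

```python
# Sketch: estimating an LC50 by linear interpolation of mortality against
# log10(concentration). A probit regression is the standard approach; this
# interpolation is the minimal approximation. All data below are hypothetical.
from math import log10

def lc50(concs, mortalities):
    """Interpolate the concentration giving 50% mortality on a log10 scale."""
    points = list(zip(concs, mortalities))
    for (c1, m1), (c2, m2) in zip(points, points[1:]):
        if m1 <= 0.5 <= m2:
            x1, x2 = log10(c1), log10(c2)
            x = x1 + (0.5 - m1) * (x2 - x1) / (m2 - m1)
            return 10 ** x
    raise ValueError("50% mortality not bracketed by the data")

concs = [10, 20, 40, 80]             # mg/L, hypothetical exposure series
mortalities = [0.1, 0.3, 0.7, 0.95]  # proportion dead, hypothetical

estimate = lc50(concs, mortalities)  # falls between 20 and 40 mg/L
```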
EVALUATION OF GENERAL GENETIC DIVERSITY OF FISHES FROM MIDDLE AND SMALL RIVERS IN EAST CHINA——TAKING CAO'E RIVER AS AN EXAMPLE REN Gang, XUAN Xin-Ling, XIE Ya-Ting, LI Bi-Ying, CHEN Min, CAI Ya-Jun, SHEN Wen-Ying The genetic diversity of fish is seriously affected by human disturbance factors such as water pollution and habitat destruction. Recent studies on the genetic diversity of fishes in small and medium-sized rivers in eastern China have focused on single species, with little comprehensive evaluation of fish genetic diversity and its causes. In this study, the Cao'e River was selected as a representative of the middle and small rivers of East China to evaluate the general genetic diversity of its fishes using the mitochondrial cytochrome b gene (Cyt b). The results showed that the haplotype diversity indices of Cyt b from the 21 species and 26 populations ranged from 0.074 to 0.987, and their nucleotide diversity indices ranged from 0.00019 to 0.00520. The genetic diversities of different species varied widely. Comparing the genetic diversities of fishes in different sections of the Cao'e River, the haplotype diversity indices of fishes decreased gradually from the estuary to upstream (P<0.05). The haplotype diversity indices of populations from the Cao'e River were significantly lower than those of the same species from large rivers such as the Yangtze River and Yellow River (P<0.05). The haplotype diversity indices of sensitive fishes were significantly lower than those of moderately tolerant fishes (P<0.05). Both the haplotype diversity indices and nucleotide diversity indices of three species, Pseudorasbora parva, Pelteobagrus nitidus and Mastacembelus aculeatus, in populations from the upstream sampling site of Jinling were lower than those in populations of the same species from Xianyan, a sampling site in the middle and lower reaches.
This result implied that the overall genetic diversity of fishes from the Cao'e River is at a middle or even low level, and that water pollution and overfishing might be the main reasons for the reduced genetic diversity in the Cao'e River. In summary, our results provide an important theoretical basis for the management, protection, exploitation and utilization of the fish resources of the Cao'e River, and of the middle and small rivers of East China in general. REN Gang, XUAN Xin-Ling, XIE Ya-Ting, LI Bi-Ying, CHEN Min, CAI Ya-Jun and SHEN Wen-Ying. EVALUATION OF GENERAL GENETIC DIVERSITY OF FISHES FROM MIDDLE AND SMALL RIVERS IN EAST CHINA——TAKING CAO'E RIVER AS AN EXAMPLE[J]. ACTA HYDROBIOLOGICA SINICA. doi: 10.7541/2020.009. EFFECTS OF FISH MEAL LEVELS ON GROWTH AND IMMUNITY OF BLACK CARP (MYLOPHARYNGODON PICEUS) IN DIFFERENT CULTURE DENSITIES HU Yi, LIU Yan-Li, TIAN Qian-Qian, SHI Yong, ZHONG Lei, ZHOU Jian-Cheng To study the effects of fishmeal level and stocking density on the growth and immunity of juvenile black carp, a two-factor design of fishmeal level (10%, 20%) × culture density (50, 100, 200 fish/cage) was applied. Young black carp of (2.50±0.02) g were divided into 6 groups (L50, L100, L200, H50, H100 and H200) with 3 replicates in each group, using reservoir cages (1.5 m×1.5 m×1.5 m). In the early stage of the experiment (week 8), the weight gain rate of black carp increased at first and then decreased with increasing stocking density, and the weight gain rate of the H200 group was higher than that of the L200 group (P>0.05), while it was significantly lower than that of the L200 group in the later stage (week 16) (P<0.05). The survival rate of the L200 group was lower than that of the L50 group (P>0.05). Stocking density and fishmeal level showed an interactive effect on the survival rate and the weight gain rate (P<0.05).
In the early stage, the levels of lysozyme (LSZ) and serum glucose (GLU) of the low-fishmeal group decreased first and then increased with increasing density, and intestinal secretory immunoglobulin A (S-IgA) increased. The levels of LSZ, complement 4 (C4), immunoglobulin M (IgM), S-IgA and cortisol (COR) of the H200 group were higher than those of the L200 group, while the GLU of the H200 group was lower than that of the L200 group (P>0.05). In the later stage, the level of C4 in both the high- and low-fishmeal groups decreased at first and then increased with increasing density (P>0.05). In the H200 group, the IgM and COR levels were higher, and the GLU level lower, than those of the L200 group (P>0.05). In summary, high-density culture negatively affected growth performance, immunity and anti-stress ability. The increased fishmeal level reversed the effects of high-density culture on growth performance, immunity and anti-stress ability in the early stage. However, in the later stage, the increased fishmeal level only mitigated the high-density-culture-induced decrease in survival rate, without rescuing growth performance and immune function. HU Yi, LIU Yan-Li, TIAN Qian-Qian, SHI Yong, ZHONG Lei and ZHOU Jian-Cheng. EFFECTS OF FISH MEAL LEVELS ON GROWTH AND IMMUNITY OF BLACK CARP (MYLOPHARYNGODON PICEUS) IN DIFFERENT CULTURE DENSITIES[J]. ACTA HYDROBIOLOGICA SINICA. doi: 10.7541/2020.010.
EFFECTS OF DIETARY FISHMEAL REPLACEMENT WITH MEAT AND BONE MEAL ON THE GROWTH PERFORMANCE, BLOOD PHYSIOLOGICAL AND BIOCHEMICAL INDICES, MUSCLE CHEMICAL COMPOSITION AND TEXTURE CHARACTERISTICS IN JUVENILE FURONG CRUCIAN CARP (FURONG CARP♀× RED CRUCIAN CARP♂) CHENG Xiao-Fei, JIANG Guo-Min, XIANG Jin, SONG Rui, LI Shao-Ming, WU Yuan-An, LIU Li, WANG Zhi-Ming An 8-week growth experiment was conducted to investigate the effects of dietary replacement of fish meal protein with meat and bone meal (MBM) on the growth performance, feed utilization, blood physiological and biochemical indices, muscle chemical composition and texture characteristics of juvenile Furong crucian carp (Furong carp♀× red crucian carp♂) [initial body weight (17.47±2.56) g]. Three isonitrogenous (crude protein: 38%) and isolipidic (crude lipid: 6.5%) diets were formulated with 0, 50% and 100% of dietary fish meal protein replaced by MBM (designated FM, T1 and T2). The results showed that no significant differences were found in weight gain rate (WGR), specific growth rate (SGR) or feeding rate (FR) among the FM, T1 and T2 groups (P>0.05), while the feed conversion ratio (FCR) of the FM group was significantly higher than that of the T2 group (P<0.05). There were no significant differences in blood physiological and serum biochemical indices, except for hemoglobin (HGB) and aspartate aminotransferase (AST). The HGB content in the T1 and T2 groups was significantly higher than that in the FM group (P<0.05). On the other hand, AST showed a downward trend with increasing dietary MBM, and the AST of the T2 group was significantly lower than that of the FM group. The crude lipid content of the dorsal muscle in the T1 group was significantly lower than that in the FM group.
Meanwhile, with the replacement of dietary fish meal by MBM, the Asp, Glu, Gly, Ala, Val, Met, Ile, Leu, Tyr, Phe, ΣEAA, ΣDAA and ΣTAA contents of the dorsal muscle decreased, while the elasticity and adhesion of the dorsal muscle increased. In summary, MBM is an acceptable alternative animal protein source for Furong crucian carp, and 100% of dietary fish meal could be replaced by MBM without significant adverse effects on the growth of Furong crucian carp. CHENG Xiao-Fei, JIANG Guo-Min, XIANG Jin, SONG Rui, LI Shao-Ming, WU Yuan-An, LIU Li and WANG Zhi-Ming. EFFECTS OF DIETARY FISHMEAL REPLACEMENT WITH MEAT AND BONE MEAL ON THE GROWTH PERFORMANCE, BLOOD PHYSIOLOGICAL AND BIOCHEMICAL INDICES, MUSCLE CHEMICAL COMPOSITION AND TEXTURE CHARACTERISTICS IN JUVENILE FURONG CRUCIAN CARP (FURONG CARP♀× RED CRUCIAN CARP♂)[J]. ACTA HYDROBIOLOGICA SINICA. doi: 10.7541/2020.011. EFFECTS OF COPPER/CADMIUM STRESSES ON PHYSIOLOGICAL AND BIOCHEMICAL INDEXES AND FECUNDITY OF ORYZIAS MELASTIGMA SUN Jin-Hui, PAN Xia, XU Yong-Jian, YAN Xin, XIE Shang-Duan 2020, 44(1): 95—103 doi: 10.7541/2020.012 To investigate the effects of the heavy metals copper and cadmium on Oryzias melastigma, the relationships among indicators were analyzed. Three concentration gradients of copper and cadmium were set according to the seawater quality standard, and five physiological and biochemical indicators [lactic acid (LA), lactate dehydrogenase (LDH), testosterone (T), follicle-stimulating hormone (FSH) and luteinizing hormone (LH)] as well as fecundity were determined. The LA content of O. melastigma decreased significantly with prolonged copper exposure, and increased with increasing cadmium concentration. The activity of LDH did not change much under copper exposure, but increased slightly with prolonged cadmium exposure.
The content of T increased with short-term copper treatment, but decreased significantly as exposure was prolonged. The content of T increased with higher cadmium concentration and prolonged exposure. The contents of FSH and LH decreased significantly with increasing copper exposure, and decreased with increasing cadmium concentration; the contents of both FSH and LH were closely related to changes in T. The fecundity of O. melastigma showed a significant downward trend with prolonged copper exposure and increased concentration. These results indicated that copper and cadmium exposure can cause physiological and biochemical changes in O. melastigma in a gender-dependent pattern, and that female fish can be selected for related pollution monitoring. SUN Jin-Hui, PAN Xia, XU Yong-Jian, YAN Xin and XIE Shang-Duan. EFFECTS OF COPPER/CADMIUM STRESSES ON PHYSIOLOGICAL AND BIOCHEMICAL INDEXES AND FECUNDITY OF ORYZIAS MELASTIGMA[J]. ACTA HYDROBIOLOGICA SINICA. doi: 10.7541/2020.012. THE IDENTIFICATION OF NEW TYPES OF INTERMUSCULAR BONES IN COILIA NASUS CHANG Yong-Jie, ZHOU Jia-Jia, ZHANG Li-Hong, MENG You-Lian, GAO Ze-Xia 2020, 44(1): 104—111 doi: 10.7541/2020.013 Intermuscular bones (IBs) are common in lower teleosts, and their morphological types and numbers vary among fish species. In this study, we documented the number, morphology and distribution of IBs in Coilia nasus. The morphology of IBs in C. nasus did not differ from that of Cyprinidae species; however, the distribution of IBs was quite different. Besides epineurals, epicentrals and epipleurals, we found two other categories of IBs in C. nasus, located in the dorsal and ventral parts on both sides of the vertebrae. Following the literature, we named them dorsal and ventral myorhabdoi, respectively. These types of IBs were also identified in C. brachygnathus. The morphology of these IBs was of the non-forked type ("1" or "(").
The number of IBs in C. nasus ranged from 492 to 543, and the numbers of epineurals, epicentrals, epipleurals, dorsal myorhabdoi and ventral myorhabdoi ranged from 114 to 142 (mean 133), 28 to 51 (mean 42), 138 to 153 (mean 142), 92 to 135 (mean 114) and 66 to 98 (mean 89), respectively. IBs could be stained by alizarin red, but not by alcian blue. The epineurals and epipleurals were connected one by one through connective tissues. This study identified new categories of IBs that supplement the known types of IBs in teleosts. CHANG Yong-Jie, ZHOU Jia-Jia, ZHANG Li-Hong, MENG You-Lian and GAO Ze-Xia. THE IDENTIFICATION OF NEW TYPES OF INTERMUSCULAR BONES IN COILIA NASUS[J]. ACTA HYDROBIOLOGICA SINICA. doi: 10.7541/2020.013. HISTOLOGICAL ATLAS OF SINIBOTIA REEVESAE BRAIN SHI Jin-Rong, ZHANG Qing-Lian, SHA Xiao-Yu, WANG Yong-Ming, XIE Bi-Wen Sinibotia reevesae, an endemic fish, lives only in the upper reaches of the Yangtze River. This study explored the structural characteristics of the S. reevesae brain and the effects of its ecological habits on the central nervous system. The results showed that the S. reevesae brain was composed of five parts: telencephalon, diencephalon, mesencephalon, cerebellum and myelencephalon. The olfactory lobe was a typical spindle shape, and the preoptic nucleus was arranged in a cord-like manner without distinct large-cell and small-cell groups. The corpus mamillare and parasympathetic nucleus were visible in the diencephalon; meanwhile, the saccus vasculosus and inferior lobes were well developed. There were five layers in the tectum opticum of the mesencephalon and three layers in the cerebellum. The myelencephalon, located at the end of the brain, differentiated into facial lobes and well-developed vagal lobes.
Histological observations revealed that the olfactory, auditory, tactile, gustatory and motor centers of S. reevesae were well developed. In summary, S. reevesae mainly depends on the senses of smell, hearing, touch and taste to forage and to evade natural enemies. SHI Jin-Rong, ZHANG Qing-Lian, SHA Xiao-Yu, WANG Yong-Ming and XIE Bi-Wen. HISTOLOGICAL ATLAS OF SINIBOTIA REEVESAE BRAIN[J]. ACTA HYDROBIOLOGICA SINICA. doi: 10.7541/2020.014. This study investigated annual variations of fish communities in different sections along the longitudinal gradient of the near-natural river based on data collected from 2007 to 2016. A total of 133 fish species, belonging to 7 orders, 20 families and 84 genera, were collected. Among these species, Acipenser dabryanus and Myxocyprinus asiaticus are listed as class I and class II protected species in China, while another 36 species are endemic to the upper Yangtze River. The number of fish species increased along the longitudinal gradient, from 47 in the Chishui Town section to 90 in the Chishui City section and 120 in the Hejiang County section. Cluster analysis and non-metric multidimensional scaling (nMDS) ordination revealed that the fish communities in all sections varied significantly over time. The relative abundance of large and medium-sized economic fishes, such as S. sinensis and O. sima, declined continuously, while small-sized fishes such as H. labeo, S. argentatus and R. giurinus showed the opposite trends. Additionally, the abundance of some endemic fish species, such as H. tchangi and C. guichenoti, declined markedly. Possible causes include local overfishing, navigation, channel regulation and hydropower development, as well as changes in the aquatic environment in the mainstream of the upper Yangtze River. In order to effectively protect fish stocks, it is recommended to strictly manage fisheries and water activities.
Long-term monitoring should also be strengthened to detect changes in fish assemblage structure in a timely manner. LIU Fei, LIU Ding-Ming, YUAN Da-Chun, ZHANG Fu-Bin, WANG Xue, ZHANG Zhi, QIN Qiang, WANG Jian-Wei and LIU Huan-Zhang. INTERANNUAL VARIATIONS OF FISH ASSEMBLAGE IN THE CHISHUI RIVER OVER THE LAST DECADE[J]. ACTA HYDROBIOLOGICA SINICA. doi: 10.7541/2020.015. FEEDING HABITS OF PROCAMBARUS CLARKIA AND FOOD WEB STRUCTURE IN TWO DIFFERENT AQUACULTURE SYSTEMS ZHOU Zheng, MI Wu-Juan, XU Yuan-Zhao, SONG Qing-Yang, BI Yong-Hong In order to explore the food composition of Procambarus clarkia and the food web structure in two different systems (a crayfish-only system and an integrated rice-crayfish symbiosis farming (IRCSF) system), we analyzed the carbon and nitrogen stable isotope ratios (δ13C and δ15N) of sources and consumers, the food web structure by SIBER, and the food composition of P. clarkia by SIAR. The results showed that among the 19 collected species, the δ13C values of consumers ranged from –34.22‰ to –25.34‰, the δ15N values ranged from 2.33‰ to 8.05‰, and the trophic levels ranged from 1.46 to 3.64. The trophic level of P. clarkia in the crayfish-only system was higher than that in the IRCSF system. The food web metrics showed that the isotopic niches of P. clarkia in the two systems were similar. In the IRCSF system, the trophic diversity in the food web was higher than in the crayfish-only system, while the niche overlap of each species and the trophic redundancy in the food web were lower. The significant positive correlation between body length/weight and the δ15N value of P. clarkia means that P. clarkia prefers animal baits in both systems. The food contribution for P. clarkia was more uniform and the proportion of plant baits was higher in the IRCSF system compared with the crayfish-only system. The results indicated that the transfer loss of energy from sources to P. clarkia was higher in the crayfish-only system, and that P. clarkia in the IRCSF system was more herbivorous.
ZHOU Zheng, MI Wu-Juan, XU Yuan-Zhao, SONG Qing-Yang and BI Yong-Hong. FEEDING HABITS OF PROCAMBARUS CLARKIA AND FOOD WEB STRUCTURE IN TWO DIFFERENT AQUACULTURE SYSTEMS[J]. ACTA HYDROBIOLOGICA SINICA. doi: 10.7541/2020.016. ISOLATION, IDENTIFICATION AND PATHOGENICITY OF THE PATHOGENIC BACTERIUM FROM CENTROPRISTIS STRIATA WU Jing, WANG Geng-Shen, LIU Min-Hai, LI Wei-Ye, WANG Wei, SHI Hui, XU Wen-Jun, XIE Jian-Jun, HE Jie To discover the cause of disease in black seabass, bacterial strain ZS201807 was isolated from cultured Centropristis striata showing symptoms of white spots in the gills and visceral organs. The biochemical and physiological characteristics of the isolated strain were studied using conventional methods, such as API 20 strips and 16S rDNA gene sequencing. The bacterium was identified as Edwardsiella tarda. An artificial infection experiment indicated that the strain was the causative agent of the disease in Centropristis striata. Histopathological analysis revealed that the spleen and kidney were the main target organs of serious infection, showing extensive erythrocyte infiltration in the spleen tissue, serious blood stasis, capillary dilation in the gill filaments, renal tubular cavity stenosis, glomerular enlargement, epithelial cell swelling and cell cavitation. Ultrastructural pathology showed a large accumulation of rod-shaped bacteria in the spleen and head kidney tissues of the sick fish. The drug susceptibility test showed that the bacterium was sensitive to 14 drugs including ciprofloxacin (5 μg per disc), tetracycline (30 μg) and enrofloxacin (5 μg), and resistant to 13 drugs including penicillin (10 U), azithromycin (15 μg) and amikacin (30 μg). It was confirmed that the pathogen causing the disease and death of the black seabass was Edwardsiella tarda. WU Jing, WANG Geng-Shen, LIU Min-Hai, LI Wei-Ye, WANG Wei, SHI Hui, XU Wen-Jun, XIE Jian-Jun and HE Jie.
ISOLATION, IDENTIFICATION AND PATHOGENICITY OF THE PATHOGENIC BACTERIUM FROM CENTROPRISTIS STRIATA[J]. ACTA HYDROBIOLOGICA SINICA. doi: 10.7541/2020.017. ISOLATION AND CHARACTERIZATION OF AEROMONAS SALMONICIDA SUBSPECIES SALMONICIDA FROM LARGEMOUTH BRONZE GUDGEON (COREIUS GUICHENOTI) CAGE-CULTURED IN THE UPPER REACHES OF YANGTZE RIVER LONG Meng, LI Tong-Tong, JIANG Yao, ZHANG Qian-Qian, ZHANG De-Feng, ZHANG Fu-Tie, WANG Jian-Wei, LI Ai-Hua Largemouth bronze gudgeon (Coreius guichenoti) is a potamodromous fish endemic to the upper reaches of the Yangtze River. An epidemic was found in largemouth bronze gudgeon at a farm in Luzhou, Sichuan province, southwest China, at the end of March 2012. In this study, we report the first observation of furunculosis in largemouth bronze gudgeon. One dominant bacterial strain, YTL1, was isolated from the liver of diseased largemouth bronze gudgeon, and a series of methods including morphological observation, biochemical tests, and phylogenetic analysis of 16S rRNA and six housekeeping genes were used to identify the pathogen. Based on the results, the strain was finally identified as A. salmonicida subsp. salmonicida. Antimicrobial susceptibility tests were carried out by the standard Kirby-Bauer disc diffusion method to screen effective drugs for the therapy of the disease, with results showing that YTL1 was sensitive to 13 antibiotics such as florfenicol, norfloxacin, and ampicillin, resistant to 6 antibiotics such as bacitracin, streptomycin, and kanamycin, and moderately sensitive to erythromycin. Accordingly, florfenicol was added to diets to control furunculosis in largemouth bronze gudgeon, with good results. Artificial infection experiments in grass carp fingerlings and zebrafish produced symptoms similar to those of diseased largemouth bronze gudgeon. In conclusion, our study demonstrated that multilocus sequence typing based on six housekeeping genes is an effective method to identify A.
salmonicida strains to the subspecies level, confirmed that A. salmonicida infection is one of the greatest threats to the artificial breeding and aquaculture of largemouth bronze gudgeon, and expanded the known susceptible hosts of A. salmonicida subsp. salmonicida to more cyprinid fishes. LONG Meng, LI Tong-Tong, JIANG Yao, ZHANG Qian-Qian, ZHANG De-Feng, ZHANG Fu-Tie, WANG Jian-Wei and LI Ai-Hua. ISOLATION AND CHARACTERIZATION OF AEROMONAS SALMONICIDA SUBSPECIES SALMONICIDA FROM LARGEMOUTH BRONZE GUDGEON (COREIUS GUICHENOTI) CAGE-CULTURED IN THE UPPER REACHES OF YANGTZE RIVER[J]. ACTA HYDROBIOLOGICA SINICA. doi: 10.7541/2020.018. INDIVIDUAL BIOLOGY OF CHUM SALMON FROM SUIFEN RIVER WANG Ji-Long, LIU Wei, WANG Wei-Kun, LI Pei-Lun, YANG Wen-Bo To study the individual biology of chum salmon, 447 samples were collected in the Dongning section of the Suifen River from 2012 to 2017. The results showed that the age groups of the samples were 1+—5+, of which ages 3+ and 2+ were dominant for females and males, respectively. The relationships between body weight and body length of male and female salmon were W=0.0082×L^3.0604 and W=0.0076×L^3.0746, respectively, both indicating isometric growth. The von Bertalanffy growth function was used to model the fork length growth of chum salmon. The fork length growth equations of female and male chum salmon were Lt,F=141.64×[1–e^(–0.11(t+1.55))] and Lt,M=119.51×[1–e^(–0.13(t+1.45))], respectively. The fork length growth rate of chum salmon was inversely proportional to the age at sexual maturity. The fork length at which 50% of individuals reach sexual maturity (L50) was estimated with a logistic function as 42.15 cm for males and 51.53 cm for females. ARSS analysis revealed a significant difference in L50 between males and females. The averages of absolute fecundity (F) and relative fecundity (FL and FW) were 3412 eggs, 52.42 eggs/cm and 1.17 eggs/g, respectively.
F was significantly positively correlated with the fork length, body weight and gonad weight of female chum salmon, and GSI was also significantly positively correlated with fork length, body weight and F. Power exponential equations were used to model the relationships of F with fork length and body weight, respectively, with the regression functions as follows: F=0.0311×L^2.7745 (R²=0.638); F=1.946×W^0.9374 (R²=0.704). This study provides basic information for the conservation of chum salmon. WANG Ji-Long, LIU Wei, WANG Wei-Kun, LI Pei-Lun and YANG Wen-Bo. INDIVIDUAL BIOLOGY OF CHUM SALMON FROM SUIFEN RIVER[J]. ACTA HYDROBIOLOGICA SINICA. doi: 10.7541/2020.019. POPULATION RESOURCES AND FISHERY MANAGEMENT POLICIES OF PTYCHOBARBUS DIPOGON IN THE YARLUNG ZANGBO RIVER YANG Xin, LI Da-Peng, SHAO Jian, XIE Cong-Xin, LIU Xiang-Jiang, HUO Bin Ptychobarbus dipogon is endemic to China and has been threatened by overfishing and biological invasion. Based on 956 individuals collected from Lhaze to Nyemo in the Yarlung Zangbo River during October 2008 to September 2009, April 2012 to July 2012, and March 2013, the population resources and fishery management policies of this species were studied using per-recruit models. The total instantaneous annual mortality (Z) of female and male P. dipogon was 0.52/year and 0.70/year, respectively. The natural mortality (M) of female and male P. dipogon ranged over 0.10—0.17/year and 0.14—0.24/year, respectively. The current fishing mortality (Fcur) ranged over 0.35—0.42/year for females and 0.46—0.56/year for males. The spawning potential ratio of P. dipogon ranged over 3.1%—6.7% for females and 9.8%—18.2% for males, both significantly lower than the threshold reference point (25%). These results indicated that the stock of P. dipogon has been over-exploited under the current fishery management policy.
To evaluate the protective effects of capture age and seasonal closure, 14 different fishery management policies were simulated. The results showed that raising the minimum capture age to no less than 15 years, or setting a closed season from February to June, can effectively protect the P. dipogon population. YANG Xin, LI Da-Peng, SHAO Jian, XIE Cong-Xin, LIU Xiang-Jiang and HUO Bin. POPULATION RESOURCES AND FISHERY MANAGEMENT POLICIES OF PTYCHOBARBUS DIPOGON IN THE YARLUNG ZANGBO RIVER[J]. ACTA HYDROBIOLOGICA SINICA. doi: 10.7541/2020.020. REFERENCE GENE SELECTION FOR QUANTITATIVE REAL-TIME PCR NORMALIZATION IN MOINA MACROCOPA EXPOSED TO PHENOL WANG Qian, LIU Wen-Xiu, GAO Fei, WANG Lan To identify suitable reference genes for the normalization of quantitative real-time PCR (qRT-PCR) in Moina macrocopa, we tested three candidate reference genes (β-actin, 16S rRNA and 12S rRNA) using four analysis methods: (1) expression level of the genes (cycle threshold value); (2) GeNorm; (3) NormFinder; and (4) BestKeeper. The results showed that the Ct values of the β-actin, 16S rRNA and 12S rRNA genes remained largely unchanged in M. macrocopa treated with different concentrations of phenol, and the order of stability was 16S rRNA>12S rRNA>β-actin. GeNorm analysis revealed that the order of stability was 16S rRNA=β-actin>12S rRNA. Both NormFinder and BestKeeper analyses demonstrated that the order of stability was 16S rRNA>β-actin>12S rRNA. These results indicated that 16S rRNA was the best-fit reference gene for qRT-PCR in M. macrocopa, at least under phenol treatment, which provides useful information for future functional investigations of target gene expression in M. macrocopa in response to environmental stress. WANG Qian, LIU Wen-Xiu, GAO Fei and WANG Lan. REFERENCE GENE SELECTION FOR QUANTITATIVE REAL-TIME PCR NORMALIZATION IN MOINA MACROCOPA EXPOSED TO PHENOL[J]. ACTA HYDROBIOLOGICA SINICA. doi: 10.7541/2020.021.
MOLECULAR DIVERSITIES OF PLANKTONIC MICROBIAL EUKARYOTES IN THE PEARL RIVER AND THEIR RELATIONSHIP WITH THE WATER ENVIRONMENT ZHU Chang-Yu, LU Kai-Hui, YI Zhen-Zhen To analyze the molecular diversity of planktonic microbial eukaryotes, as well as the relationships between community structures and physicochemical factors, based on terminal restriction fragment length polymorphism (T-RFLP), water samples were collected from 43 sites in the Guangzhou reach of the Pearl River and the Guangdong reach of the Xijiang River during the wet season and the dry season, respectively. The results revealed that the water bodies of both reaches were seriously polluted by nitrogen and phosphorus, and the water was of poor quality. The diversity indexes of planktonic microbial eukaryotes in the Guangdong reach of the Xijiang River were higher than those in the Guangzhou reach of the Pearl River. The Shannon-Wiener indexes of samples collected in the wet season were lower than those in the dry season. There were significant differences in the community structures of microbial eukaryotes between seasons and regions. The community structures of planktonic microbial eukaryotes in the Pearl River were affected by chemical oxygen demand, permanganate index, ammonia nitrogen, total nitrogen and total phosphorus. However, the correlation coefficients between community structures and physicochemical factors differed depending on seasons and regions. In addition, one T-RF and six T-RFs were selected as possible sensitive species (Cystobasidium sp. or Protostelium nocturnum) and pollution-tolerant species (Acanthamoeba hatchetti, Babesia bicornis, Blastocystis sp., Botryosphaerella sudetica, Candida caryicola, Coccomyxa simplex, Cryptomonas ovata, Filos agilis, Stenophora robusta, Sulfonecta uniserialis and Theileria sp., etc.), respectively. ZHU Chang-Yu, LU Kai-Hui and YI Zhen-Zhen.
MOLECULAR DIVERSITIES OF PLANKTONIC MICROBIAL EUKARYOTES IN THE PEARL RIVER AND THEIR RELATIONSHIP WITH THE WATER ENVIRONMENT[J]. ACTA HYDROBIOLOGICA SINICA. doi: 10.7541/2020.022. MORPHOLOGY AND PHYLOGENY OF TWO NEWLY RECORDED CILIATES (PARAMECIUM PRIMAURELIA AND TETRAHYMENA MIMBRES) FROM TIBETAN HOT SPRINGS IN CHINA JIANG Chuan-Qi, GU Si-Yu, AN Rui-Zhi, BA Sang, MIAO Wei We used living observation, protargol impregnation and silver staining methods to investigate the nuclear morphology and position, infraciliature, and oral apparatus of two oligohymenophorean ciliates, Paramecium primaurelia and Tetrahymena mimbres, which were collected from hot springs of the Tibet Autonomous Region. The SSU rDNA and COX I genes of these two species were sequenced, and phylogenetic analysis revealed that P. primaurelia clusters within the Paramecium aurelia complex, and T. mimbres clusters within the Tetrahymena borealis group. These two species are newly recorded in China from the hot springs of the Tibet Autonomous Region. The study of these two newly recorded species not only provides new methods and insights for the discovery of protozoan resources in Tibetan hot springs, but also provides basic information for the study of protozoan environmental adaptation. JIANG Chuan-Qi, GU Si-Yu, AN Rui-Zhi, BA Sang and MIAO Wei. MORPHOLOGY AND PHYLOGENY OF TWO NEWLY RECORDED CILIATES (PARAMECIUM PRIMAURELIA AND TETRAHYMENA MIMBRES) FROM TIBETAN HOT SPRINGS IN CHINA[J]. ACTA HYDROBIOLOGICA SINICA. doi: 10.7541/2020.023. EFFECTS OF TEMPERATURES AND MICROCYSTIS AERUGINOSA TOXICITY ON LIFE-TABLE PARAMETERS OF BRACHIONUS CALYCIFLORUS YAO Hui, ZHANG Huan, WANG Song-Bo, GENG Hong To investigate the effects of two strains of Microcystis aeruginosa (microcystin-producing and microcystin-free) at different concentrations on the life-table parameters of Brachionus calyciflorus, we conducted a life-table study at 25℃ and investigated the responses of life-table parameters of B. calyciflorus to microcystin-producing M.
aeruginosa concentrations at five temperature gradients. The results showed that both M. aeruginosa toxicity and concentration significantly mediated the net reproduction rate (R0; F=31.83, P<0.01; F=30.36, P<0.01) and intrinsic growth rate (rm; F=34.67, P<0.01; F=18.73, P<0.01) of B. calyciflorus with a significant interactive effect, and that temperature and microcystin-producing M. aeruginosa concentration had significant independent and interactive effects on the net reproduction rate (R0; F=13.51, P<0.01) and intrinsic growth rate (rm; F=12.99, P<0.01) of B. calyciflorus. Microcystin-free M. aeruginosa promoted the rotifer population and could be used as a food source for rotifers at low concentration (1×10^4 cells/mL), but its food quality was low due to the lack of fatty acids and other nutrients. High concentrations of microcystin-free M. aeruginosa (1×10^5 cells/mL and 5×10^5 cells/mL) obviously inhibited the growth of rotifers because rotifers prefer microcystin-free M. aeruginosa. The net reproduction rate and intrinsic growth rate of B. calyciflorus increased significantly with increasing concentration of microcystin-producing M. aeruginosa. Moreover, high temperatures (30℃ and 35℃) accelerated their reproduction and growth rates, shortened the generation time, and promoted the inhibitory effect of microcystin-free M. aeruginosa on rotifers. YAO Hui, ZHANG Huan, WANG Song-Bo and GENG Hong. EFFECTS OF TEMPERATURES AND MICROCYSTIS AERUGINOSA TOXICITY ON LIFE-TABLE PARAMETERS OF BRACHIONUS CALYCIFLORUS[J]. ACTA HYDROBIOLOGICA SINICA. doi: 10.7541/2020.024.
STUDY ON PHYSIOLOGICAL REACTION OF CERATOPHYLLUM DEMERSUM TO SALINITY AND ALKALINITY STRESS LONG Yi-Nian, LU Rui, WANG Pei, LIN Li-Li, CHEN Yu-Hua, XIAO En-Rong, WU Zhen-Bin As a representative lake in the Momoge Wetland Reserve, White Crane Lake is facing the risk of salinization and eutrophication. In order to slow down the salinization trend and provide a research basis for the restoration of submerged vegetation and the maintenance of species diversity in White Crane Lake, this study investigated the physiological indexes of Ceratophyllum demersum under different alkalinities (0, 7, 10 and 17 mmol/L) and mixed saline-alkaline conditions (salinities of 0.3, 0.6, 1, 2 and 4 g/L, with corresponding alkalinities of 1.9, 3.8, 6.3, 12.6 and 25.2 mmol/L). The results showed that when the salinity was below 1.5 g/L, alkalinity had no effect on C. demersum. Within the range of the alkalinity gradient set in the experiment, C. demersum grew normally. Although peroxidase (POD) and proline in C. demersum showed gradient changes, C. demersum was still able to tolerate alkalinity conditions below 17 mmol/L. With increasing mixed saline-alkaline concentration, the growth of C. demersum showed a trend from flourishing to decline. Under the conditions of salinity 0.6 g/L and alkalinity 3.8 mmol/L, C. demersum showed the best growth, consistent with promotion at low concentrations and inhibition at high concentrations. When salinity increased to 2 g/L and alkalinity to 12.6 mmol/L, C. demersum was under stress and only partially survived; the POD content increased sharply and varied greatly between plants. When salinity increased to 4 g/L and alkalinity reached 25.2 mmol/L, all C. demersum died within 21 days, which can be attributed to the high mixed saline-alkaline concentration; the removal rates of nitrogen and phosphorus were negatively correlated with the saline-alkaline concentration. This study provides a reference for the restoration of submerged vegetation in salinized lakes. LONG Yi-Nian, LU Rui, WANG Pei, LIN Li-Li, CHEN Yu-Hua, XIAO En-Rong and WU Zhen-Bin.
STUDY ON PHYSIOLOGICAL REACTION OF CERATOPHYLLUM DEMERSUM TO SALINITY AND ALKALINITY STRESS[J]. ACTA HYDROBIOLOGICA SINICA. doi: 10.7541/2020.025. SPECIES COMPOSITION AND DISTRIBUTION OF FLOATING MAT IN LAKE ERHAI ZHAO Ya-Xuan, QI Liang-Yu, HOU Ze-Ying, ZHONG Wen, LIU Li, WU Ai-Ping The growth and propagation of aquatic plants are greatly suppressed if the water level rises rapidly. Some aquatic plants, especially some emergent plants, can form floating mats to avoid the effects of deep flooding. We studied the species composition and distribution pattern of floating mats in Lake Erhai in 2017 to determine which species could form floating mats more easily and survive rising water levels. A total of 26 aquatic species (belonging to 15 families and 19 genera) were recorded. The concentrations of total nitrogen (TN), total phosphorus (TP) and total dissolved phosphorus (TDP) in the water under the floating mats were greater than those in the open water, while the concentration of dissolved oxygen (DO) showed the opposite pattern. The results showed that both the species richness and biomass of the floating mats were positively correlated with mat area (P<0.01), while the mean biomass was not significantly correlated with mat area (P>0.05). The longest root of the floating mats was significantly correlated with mat area in summer (P<0.01) but not in winter (P>0.05). Most of the floating mats were distributed within an offshore distance of 60 m at water depths of less than 2 m. Most floating mats had areas of less than 600 m² (87% in summer and 95% in winter), contained fewer than 10 species, and had longest roots ranging from 40 to 120 cm.
The frequency and relative biomass of Zizania latifolia were the greatest among all recorded species on the floating mats in both seasons (frequency: 73.33% in summer and 66.67% in winter; relative biomass: 43.38% in summer and 41.91% in winter). These results indicated that Z. latifolia forms floating mats to escape deep-water stress more easily than the other emergent species, which may explain its dominance in the emergent community. The mechanism by which Z. latifolia forms floating mats more easily than other emergent plants deserves further investigation. ZHAO Ya-Xuan, QI Liang-Yu, HOU Ze-Ying, ZHONG Wen, LIU Li and WU Ai-Ping. SPECIES COMPOSITION AND DISTRIBUTION OF FLOATING MAT IN LAKE ERHAI[J]. ACTA HYDROBIOLOGICA SINICA. doi: 10.7541/2020.026. APPLICATION OF LICORICE IN AQUACULTURE WANG Wen-Bo, CHEN Peng, LIU Pin, HUI Rui-Min, ZHAO Qin, DOU Ling-Ling, WANG Ping, LI Ai-Hua, ZHANG Yi-Bing Licorice (Glycyrrhiza uralensis Fisch.) is known as the "king of all herbs" because of its ability to harmonize various medicines. It is a common and inexpensive medicine in China and is therefore used in aquaculture. Its main active components are polysaccharides, glycosides, alkaloids, organic acids and volatile oils. In aquaculture, the common methods of licorice administration include feeding, oral perfusion, injection, soaking and sprinkling, which are used for immune conditioning, disease control and other purposes in aquatic animals. However, there are also some problems in the use of licorice, such as the unstable quality of medicinal materials, crude methods of use, poorly standardized prescriptions, and inaccurate naming and description. In this paper, the fishery value of licorice is reviewed. WANG Wen-Bo, CHEN Peng, LIU Pin, XI Rui-Min, ZHAO Qin, DOU Ling-Ling, WANG Ping, LI Ai-Hua and ZHANG Yi-Bing. APPLICATION OF LICORICE IN AQUACULTURE[J]. ACTA HYDROBIOLOGICA SINICA. doi: 10.7541/2020.027.
Acta Hydrobiologica Sinica: established 1955, bimonthly; competent unit: Chinese Academy of Sciences; host units: Institute of Hydrobiology, Chinese Academy of Sciences, and Chinese Society for Oceanology and Limnology; Editor-in-Chief: GUI Jian-Fang; CN 42-1230/Q.
Pareto distribution A continuous probability distribution with density $$ p( x) = \left \{ \begin{array}{ll} \frac \alpha {x _ {0} } \left ( \frac{x _ {0} }{x} \right ) ^ {\alpha + 1 } , & x _ {0} < x < \infty , \\ 0, & x \leq x _ {0} , \\ \end{array} \right. $$ depending on two parameters $ x _ {0} > 0 $ and $ \alpha > 0 $. As a "cut-off" version the Pareto distribution can be considered as belonging to the family of beta-distributions (cf. Beta-distribution) of the second kind with the density $$ \frac{1}{B( \mu , \alpha ) } \frac{x ^ {\mu - 1 } }{( 1+ x) ^ {\mu + \alpha } } ,\ \ \mu , \alpha > 0,\ \ 0 < x < \infty , $$ for $ \mu = 1 $. For any fixed $ x _ {0} $, the Pareto distribution reduces by the transformation $ x = x _ {0} /y $ to a beta-distribution of the first kind. In the system of Pearson curves the Pareto distribution belongs to those of "type VI" and "type XI". The mathematical expectation of the Pareto distribution is finite for $ \alpha > 1 $ and equal to $ \alpha x _ {0} /( \alpha - 1) $; the variance is finite for $ \alpha > 2 $ and equal to $ \alpha x _ {0} ^ {2} /( \alpha - 1) ^ {2} ( \alpha - 2) $; the median is $ 2 ^ {1/ \alpha } x _ {0} $. The Pareto distribution function is defined by the formula $$ {\mathsf P} \{ X < x \} = 1 - \left ( \frac{x _ {0} }{x} \right ) ^ \alpha ,\ \ x > x _ {0} ,\ \ \alpha > 0. $$ The Pareto distribution has been widely used in various problems of economical statistics, beginning with the work of W. Pareto (1882) on the distribution of profits. It is sometimes accepted that the Pareto distribution describes fairly well the distribution of profits exceeding a certain level in the sense that it must have a tail of order $ 1/x ^ \alpha $ as $ x \rightarrow \infty $. References: [1] H. Cramér, "Mathematical methods of statistics", Princeton Univ. Press (1946). [a1] N.L. Johnson, S. Kotz, "Distributions in statistics: continuous univariate distributions", Houghton Mifflin (1970). [a2] H.T.
Davis, "Elements of statistics with application to economic data", Amer. Math. Soc. (1972). Pareto distribution. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Pareto_distribution&oldid=49651 This article was adapted from an original article by A.V. Prokhorov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
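The moments, median, and distribution function given above can be checked numerically. The sketch below (all function names are ours, not from the article) implements them directly from the formulas, together with inverse-CDF sampling:

```python
import math
import random

def pareto_cdf(x, x0, alpha):
    """P(X < x) = 1 - (x0/x)^alpha for x > x0, as in the article."""
    return 1.0 - (x0 / x) ** alpha if x > x0 else 0.0

def pareto_mean(x0, alpha):
    """alpha*x0/(alpha-1); finite only for alpha > 1."""
    return alpha * x0 / (alpha - 1) if alpha > 1 else math.inf

def pareto_var(x0, alpha):
    """alpha*x0^2/((alpha-1)^2*(alpha-2)); finite only for alpha > 2."""
    if alpha <= 2:
        return math.inf
    return alpha * x0 ** 2 / ((alpha - 1) ** 2 * (alpha - 2))

def pareto_median(x0, alpha):
    """Median is 2^(1/alpha) * x0."""
    return 2 ** (1.0 / alpha) * x0

def pareto_sample(x0, alpha, rng=random):
    """Inverse-CDF sampling: x = x0 / (1-u)^(1/alpha) with u ~ U(0,1)."""
    u = rng.random()
    return x0 / (1.0 - u) ** (1.0 / alpha)

# For x0 = 2 and alpha = 3: mean = 3.0, variance = 3.0, median = 2*2^(1/3)
print(pareto_mean(2, 3), pareto_var(2, 3), round(pareto_median(2, 3), 4))
# → 3.0 3.0 2.5198
```

As a sanity check, the CDF evaluated at the median returns exactly 0.5, and every sample exceeds the scale parameter `x0`.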
PPI network analyses of human WD40 protein family systematically reveal their tendency to assemble complexes and facilitate the complex predictions Xu-Dong Zou1, Ke An1, Yun-Dong Wu1,2 & Zhi-Qiang Ye1 WD40 repeat proteins constitute one of the largest families in eukaryotes, and widely participate in various fundamental cellular processes by interacting with other molecules. Based on individual WD40 proteins, previous work has demonstrated that their structural characteristics should confer great potential for interaction and complex formation, and has speculated that they may serve as hubs in the protein-protein interaction (PPI) network. However, what roles the whole family plays in organizing the PPI network, and whether this information can be utilized in complex prediction, remain unclear. To address these issues, quantitative and systematic analyses of WD40 proteins from the perspective of PPI networks are highly required. In this work, we built two human PPI networks by using data sets with different confidence levels, and studied the network properties of the whole human WD40 protein family systematically. Our analyses have quantitatively confirmed that the human WD40 protein family, as a whole, tends to act as hubs with an odds ratio of about 1.8 or greater, and the network decomposition has revealed that they are prone to be enriched near the global center of the whole network, with a fold change of two in the median k-values. By integrating expression profiles, we have further shown that WD40 hub proteins are inclined to be intramodular, which is indicative of complex assembling. Based on this information, we have further predicted 1674 potential WD40-associated complexes by choosing a clique-based method, which is more sensitive than others, and an indirect evaluation by co-expression scores has demonstrated its reliability.
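The co-expression score used above for indirect evaluation is commonly computed as the average pairwise Pearson correlation between the expression profiles of a complex's members. The paper's exact scoring procedure is not given here, so the following is only an illustrative sketch with made-up profiles:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def coexpression_score(profiles):
    """Average pairwise Pearson r over all member pairs of a complex."""
    genes = list(profiles)
    pairs = [(g, h) for i, g in enumerate(genes) for h in genes[i + 1:]]
    return sum(pearson(profiles[g], profiles[h]) for g, h in pairs) / len(pairs)

# Hypothetical expression profiles of three genes across four conditions
profiles = {"A": [1, 2, 3, 4], "B": [2, 4, 6, 8], "C": [4, 3, 2, 1]}
print(round(coexpression_score(profiles), 3))  # → -0.333
```

A complex whose members are truly co-regulated would score close to 1; the mixed toy set above averages the pairwise correlations 1.0, -1.0, and -1.0.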
At the systems level, rather than at the level of sporadic examples, this work has provided rich knowledge for better understanding WD40 proteins' roles in organizing the PPI network. These findings and predicted complexes can offer valuable clues for prioritizing candidates for further studies. The WD40 repeat proteins constitute one of the largest protein families in eukaryotes [1], and more than 1% of human protein-coding genes encode WD40 proteins [2]. Studies have found that they participate in signal transduction, transcriptional regulation, protein degradation, cytoskeleton assembly, DNA damage repair, cell cycle regulation, and so forth, leading to an understanding of their involvement in many fundamental cellular processes [1, 3, 4]. The β subunits of heterotrimeric G proteins, as the most well-known WD40 proteins, transduce transmembrane signals mediated by GPCRs [5]. A set of WD40 proteins containing F-box, as key modules in SCF-ubiquitin ligases, recognize substrates and are responsible for their ubiquitin-dependent degradation [6]. Studying their interactions with other molecules is indispensable to understanding their functions. Available crystal structures have shown that the WD40 domain exhibits a β-propeller structure exposing its top, bottom, and side surfaces. Through these large surfaces, they interact with other molecules and form complexes to perform their versatile functions [1, 3, 5, 7,8,9,10]. For instance, FBXW7 utilizes its top surface to interact with the substrates [11], and PALB2 interacts with BRCA2 through its side surface [12]. It is reasonable to assume that these structural characteristics can offer great potential for interactions and make them scaffolds for complex assembling. Based on this consideration, researchers have speculated that the whole WD40 protein family may act as nodes with high connectivity (i.e., hubs) in the protein-protein interaction (PPI) network [1, 4]. However, this inference has not been validated by using PPI networks directly.
In addition, what roles the whole WD40 protein family plays in organizing PPI networks remain unclear, and need to be elucidated for a better understanding of their functions. Another crucial problem is to identify their involvement in functional complexes, but whether the information drawn from the network analyses could be utilized in the prediction of WD40-associated functional complexes is currently unexplored. To address these issues, quantitative and systematic analyses of WD40 proteins, as a whole family, from the perspective of PPI networks are highly required. High-throughput approaches such as yeast two-hybrid (Y2H) and affinity purification-mass spectrometry (AP-MS) have generated large-scale PPI data sets [13,14,15], and many online databases, such as MINT [16], MIPS [17], HPRD [18], and STRING [19], have integrated comprehensive information of both high-throughput and low-throughput PPIs. The accumulation of PPI data makes it possible to construct PPI networks and to perform systematic studies based on available network analysis methods. These network analyses, focusing on either static features such as connectivity and location or dynamic features such as co-expression coefficients, have in fact yielded notable achievements [20,21,22,23,24,25]. All of these make a network analysis of WD40 proteins feasible. In this work, we adopted the human PPI data set from the HIPPIE database [26, 27] to build two human PPI networks with different confidence levels. Using these two networks in parallel, we then analyzed the network characteristics of human WD40 proteins, including their centrality measures such as degree, their location (k-value in the k-core decomposition), and the co-expression correlation coefficient between a node and its interacting partners, to help understand their roles in organizing the PPI network.
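The k-core decomposition mentioned above assigns each node the largest k such that it belongs to a subgraph in which every node has degree at least k; nodes with high k-values sit near the global center of the network. A minimal sketch of the standard peeling algorithm (our illustration, not the paper's code) is:

```python
def core_numbers(adj):
    """k-core decomposition by iteratively peeling minimum-degree nodes.

    adj: dict mapping node -> set of neighbor nodes (undirected graph).
    Returns a dict mapping each node to its core number (k-value).
    """
    degrees = {v: len(ns) for v, ns in adj.items()}
    remaining = set(adj)
    core = {}
    k = 0
    while remaining:
        v = min(remaining, key=degrees.get)  # peel a minimum-degree node
        k = max(k, degrees[v])               # core numbers never decrease
        core[v] = k
        remaining.remove(v)
        for w in adj[v]:
            if w in remaining:
                degrees[w] -= 1
    return core

# Toy network: a triangle (a, b, c) with a pendant node d attached to a.
adj = {"a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": {"a"}}
print(sorted(core_numbers(adj).items()))
# → [('a', 2), ('b', 2), ('c', 2), ('d', 1)]
```

The triangle nodes get k = 2 while the pendant node gets k = 1, illustrating how higher k-values mark the denser center of a network.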
Finally, we predicted WD40 protein-associated complexes based on network topological features and evaluated the performance of this prediction. The overall pipeline of this work is illustrated in a flowchart (see Additional file 1: Figure S1). Two PPI networks with different confidence levels We curated two human PPI data sets with different confidence levels from the HIPPIE database [26, 27]. One contains all the interactions from HIPPIE after data cleaning (namely ALL-PPI), while the other consists only of the PPIs with high confidence scores (namely HC-PPI, see Methods). In brief, ALL-PPI contains 229,137 interactions among 16,226 human proteins, while HC-PPI contains 66,789 interactions among 11,108 human proteins, accounting for about 29% of ALL-PPI (Table 1, see Additional file 2: Table S1 and Additional file 1: Figure S2). The network analyses were performed on HC-PPI and ALL-PPI in parallel, which ensured that we could obtain robust and consistent conclusions. It also allowed us to evaluate how PPIs with different confidence levels affect the inferences. Table 1 Basic information of the ALL-PPI and HC-PPI networks There are 242 and 203 WD40 proteins in ALL-PPI and HC-PPI, respectively, and all of them are located in the main component of each constructed network. As the main components occupy the majority of the nodes (see Table 1 and more detailed information in Additional file 2: Table S1), further network analyses were carried out on them only. We observed that the degree distributions approximate a power law in both networks (Additional file 1: Figure S3), which is consistent with the well-established view that most biological networks follow a scale-free topology [28]. WD40 proteins tend to be hubs in human PPI networks For the whole human WD40 protein family, we directly evaluated their tendency to act as hubs in the PPI networks.
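The construction step described above (building an undirected graph and restricting analysis to its main component, then tabulating the degree distribution) can be sketched with stdlib Python only; the edge list below is a toy example with hypothetical protein IDs, not the actual HIPPIE data:

```python
from collections import defaultdict, Counter, deque

def build_network(edges):
    """Adjacency-set representation of an undirected PPI network."""
    adj = defaultdict(set)
    for a, b in edges:
        if a != b:                 # drop self-interactions
            adj[a].add(b)
            adj[b].add(a)
    return adj

def main_component(adj):
    """Largest connected component, found by BFS from each unseen node."""
    seen, best = set(), set()
    for start in adj:
        if start in seen:
            continue
        comp, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            for nb in adj[node]:
                if nb not in comp:
                    comp.add(nb)
                    queue.append(nb)
        seen |= comp
        if len(comp) > len(best):
            best = comp
    return best

# Toy edge list (hypothetical protein IDs)
edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("E", "F")]
adj = build_network(edges)
comp = main_component(adj)
# Degree histogram over the main component, e.g., for a power-law check
degree_counts = Counter(len(adj[n]) for n in comp)
```

In practice, tools such as Cytoscape with NetworkAnalyzer (as used in this work's Methods) perform these computations at scale.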
There are 123 WD40 hub proteins (with degree greater than 5, see definition in Methods) in the HC-PPI network (Table 2). By considering the numbers of hubs and non-hubs among non-WD40 proteins, we obtained an odds ratio (OR) of 1.82 (p = 3.844e-5 in a χ2 test, see Methods). This odds ratio, which is significantly greater than 1, supports the inference that the whole WD40 protein family indeed tends to act as hubs in the HC-PPI network. We performed the same analysis on the ALL-PPI network, and the result supports the above inference even more strongly (OR = 2.83, p = 2.077e-9, see Additional file 1: Table S2). To be more stringent, we also repeated these analyses using alternative hub definitions with different cutoffs (degree greater than 10 or 15, see Methods), and all confirmed that the WD40 family tends to be hubs (Additional file 1: Table S3). The observation that the OR value in ALL-PPI is much larger than in HC-PPI for each hub definition indicates that the tendency of WD40s to be hubs may be underestimated when using high-confidence PPIs only. Nevertheless, this tendency is significantly larger than that of non-WD40s in all scenarios, demonstrating that our inference is robust. Table 2 Number of hubs and non-hubs of both WD40 and non-WD40 proteins in the HC-PPI network The definition of a hub protein is currently controversial. To avoid this issue, we further compared the degrees directly. In the HC-PPI network, the median degree of WD40 proteins is significantly greater than that of non-WD40s (9 vs. 5, fold change ~ 2, p = 2.19e-8, Mann-Whitney U test, see Additional file 1: Table S4), which again demonstrates a higher propensity of WD40 proteins to interact with other proteins than non-WD40s. Similar results were observed in the analysis of ALL-PPI (24 vs. 11, fold change ~ 2, see Additional file 1: Table S4).
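The odds-ratio test above reduces to a 2×2 contingency table (WD40 vs. non-WD40, hub vs. non-hub). A minimal sketch, using made-up counts rather than the paper's actual hub tallies, computes the OR and the uncorrected Pearson χ2 statistic; in practice a library such as scipy.stats.chi2_contingency would supply the p-value:

```python
def odds_ratio_and_chi2(a, b, c, d):
    """2x2 table: a = WD40 hubs, b = WD40 non-hubs,
    c = non-WD40 hubs, d = non-WD40 non-hubs."""
    odds_ratio = (a / b) / (c / d)
    n = a + b + c + d
    # Pearson chi-square statistic for a 2x2 table, no continuity correction
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return odds_ratio, chi2

# Illustrative counts only (hypothetical, not the HC-PPI numbers)
or_, chi2 = odds_ratio_and_chi2(30, 10, 30, 30)
```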
Based on investigations of certain individual WD40s and their structural features, previous studies have speculated that the whole WD40 family may tend to participate frequently in molecular interactions [1, 3, 29]. In this work, directly analyzing the whole set of human WD40 proteins in the PPI networks has confirmed this inference systematically. In addition, our analysis has provided a quantitative degree for each WD40 protein, which can be utilized to select candidates for in-depth studies. It is well accepted that proteins with high degree in the network are often associated with important functions [20]. In HC-PPI, the top three WD40 hubs are FBW1A, FBW1B, and DDB1, whose degrees are 108, 102, and 81, respectively (Table 3). FBW1A and FBW1B, which are paralogous to each other, serve as subunits of SCF E3 ubiquitin ligases, and many studies have shown that these two genes regulate the cell cycle by degrading related proteins such as Cdc25A and Wee1 [30, 31]. As a linker in the DDB1-CUL4-ROC1 E3 ligase, DDB1 was predicted by sequence similarity search to interact with about 90 other WD40 proteins [32]. By degrading corresponding protein substrates, it regulates many fundamental cellular processes, including DNA repair, cell cycle, and DNA replication [33]. Table 3 WD40 proteins with high and low degrees in both the HC-PPI and ALL-PPI networks Although the family as a whole tends to be hubs, many individual proteins in this family have very low degrees and are worth exploring as well. The three WD40 proteins with the lowest degrees in both the HC-PPI and ALL-PPI networks are listed in Table 3. According to database searches in PubMed and UniProt [34], they have not been studied widely and their functional annotations remain limited. Interestingly, we found that they show a tissue-specific or tissue-preferential expression pattern (for definition of expression patterns, see Methods) [2, 35]. On the contrary, the three genes with top degrees tend to be widely expressed (Table 3).
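The degree comparisons in this section rely on the Mann-Whitney U test; one would normally call scipy.stats.mannwhitneyu, but the U statistic itself is small enough to sketch with stdlib Python, using average ranks for tied values:

```python
def mann_whitney_u(x, y):
    """U statistic for sample x (vs. y), using average ranks for tied values."""
    pooled = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        # extend j over the run of tied values
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg_rank
        i = j + 1
    rank_sum_x = sum(ranks[: len(x)])
    return rank_sum_x - len(x) * (len(x) + 1) / 2

# Two illustrative degree samples (hypothetical values, not the real degrees)
u = mann_whitney_u([9, 8, 7], [5, 4, 3])
```

The p-value then follows from the normal approximation (or exact tables for small samples), which a statistics library handles.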
Although further confirmation of this correlation is needed, we can speculate that widely expressed proteins may interact with different partners in different tissues, and that combining all interactions from different tissues into the overall PPI network has resulted in their high degrees. Degree is the simplest and most intuitive characteristic describing the centrality of a node. To obtain a more comprehensive understanding of their centralities, we also compared other measures, including betweenness, closeness, stress, and clustering coefficient, between WD40 and non-WD40 proteins in both HC-PPI and ALL-PPI. All these comparisons revealed consistent trends (Additional file 1: Table S4), demonstrating that the WD40 family indeed tends to have higher centrality levels from multiple perspectives. WD40 proteins prefer to locate near the global center of PPI networks Proteins are hierarchically located in the PPI network, and those with high degrees may be located near the periphery or the center of the whole network [21], often referred to as the local center or global center (see Methods for definitions), respectively. While a protein's hub status provides valuable information for understanding its role in organizing the PPI network, whether a protein tends to be located near the global center or a local center can offer additional clues. To investigate the locations of WD40 proteins in the human PPI network, we performed k-core decomposition (see Methods) on HC-PPI. As shown in Table 4, the HC-PPI network can be split into 21 layers, and the WD40 proteins are widely distributed from layer 1 to 19. The median k-value of WD40 proteins is significantly greater than that of non-WD40s (8 vs. 4, fold change = 2, p = 8.56e-10, Mann-Whitney U test).
As a large k-value indicates a preference for being located near the global center (see Methods), this result demonstrates that this propensity of WD40 proteins is significantly higher than that of non-WD40s. We further found that the percentage of WD40 proteins in each k-core subnetwork increased almost linearly with k over a certain range (from 1 to 15 in Fig. 1, linear regression R2 = 0.95, p = 4.12e-10), further illustrating vividly that the WD40 protein family is enriched near the global center. The same analysis was carried out for the ALL-PPI network, and similar results were observed (median k-value: 20 vs. 10, fold change = 2, see Additional file 1: Figure S4 and Table S5). Table 4 WD40 proteins in different layers by k-core decomposition of the HC-PPI network Percentage of WD40 proteins in each k-core subnetwork during the decomposition of the HC-PPI network. The percentages were obtained by dividing the number of WD40 proteins by the total number of proteins in each k-core subnetwork It has been demonstrated in the yeast PPI network that proteins located near the global center tend to be essential genes and to be conserved in evolution [21]. Hence, we checked the three human WD40 proteins with the largest k-values in HC-PPI (Table 4). Among them, GBLP (also named RACK1) plays roles in many cellular processes, such as translational repression and the PKC signaling pathway, and it belongs to the human essential genes reported previously [36]. MED16 is a key component of the Mediator complex, which is involved in the transcriptional regulation of nearly all RNA polymerase II-dependent genes [37], and it is synthetically lethal when knocked out together with MED15 [38]. CORO1C, a member of the Coronin gene family, is associated with many cancers and with brain development [39]. In addition, all three genes are evolutionarily conserved in vertebrates or even across eukaryotes (see Additional file 1: Table S6).
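The near-linear trend reported for Fig. 1 comes from an ordinary least-squares fit. A stdlib-only sketch, with illustrative data in place of the actual per-core WD40 percentages, returns the slope, intercept, and coefficient of determination R2:

```python
def linear_fit(xs, ys):
    """Ordinary least squares y = a*x + b, plus the coefficient of determination."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1 - ss_res / ss_tot
    return a, b, r2

# k values vs. illustrative WD40 fractions (hypothetical, perfectly linear)
slope, intercept, r2 = linear_fit([1, 2, 3, 4], [0.1, 0.2, 0.3, 0.4])
```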
Taken together, the k-core decomposition has provided information concerning the locations of WD40 proteins in the PPI network that could not be derived from degrees alone. These results show that WD40 proteins prefer to locate near the global center in organizing the network topology. By identifying WD40 proteins close to the global center, one can further mine the family and prioritize candidates for further investigation. WD40 hubs tend to be intramodular hubs By integrating expression data into the PPI network, previous studies defined two kinds of hubs by the level of co-expression between the hub and its interacting partners (see Methods) [23, 24], where high and low levels indicate intramodular and intermodular hubs, respectively. These two kinds of hubs display distinctive characteristics consistent with their roles in organizing communications and functions of dynamic protein networks; e.g., the intramodular ones often serve as platforms to assemble complexes [23, 24]. We measured the co-expression levels between hubs and their partners by calculating the average Pearson correlation coefficients (PCC, see Methods) in the HC-PPI network. As expected, the average PCCs of WD40 and non-WD40 hubs are both higher than those of randomized data (Fig. 2). Furthermore, the average PCCs of WD40 hubs are significantly higher than those of non-WD40 hubs (Fig. 2), indicating that WD40 hubs have a higher tendency to be intramodular than non-WD40 hubs. We observed a similar trend for both the protein-level (median of average PCCs: 0.343 vs. 0.217 for WD40 and non-WD40 hubs, p = 1.7e-10, Mann-Whitney U test) and the RNA-level (0.221 vs. 0.171, p = 1.6e-4, Mann-Whitney U test) expression data in the HC-PPI network (see Additional file 1: Table S7). In addition, the difference between WD40 hubs and non-WD40 hubs is evidently larger with protein-level expression than with RNA-level expression (Fig. 2 and see Additional file 1: Table S7).
As we are studying interactions at the protein level, protein-level expression should be more appropriate and more direct than RNA-level expression. Hence, the larger difference observed with protein-level expression further strengthens our inference concerning the intramodular tendency of WD40 hubs. Similar analyses were also performed on the ALL-PPI network and led to consistent observations (see Additional file 1: Figure S5 and Table S7). Distributions of average PCCs of WD40 hubs, non-WD40 hubs, and randomized data in the HC-PPI network. The solid lines in orange represent the WD40 hubs, the dotted lines in purple denote the non-WD40 hubs, and the long-dash lines in blue represent the randomized data. The average PCCs are calculated by using both protein-level expression data (a) and RNA-level expression data (b) By using both protein-level and RNA-level expression data, and by using both the HC-PPI and ALL-PPI networks, these results provide systematic, quantitative support for the inference that WD40 hubs, as a whole set, are more prone to being intramodular. This information, in combination with their tendency to be hubs and to locate near the global center, has largely extended our understanding of their roles in organizing the PPI network. According to previous studies on PPI networks, intramodular hubs tend to assemble complexes [24]. Hence, the analyses in this section also directly confirm previous speculations about their tendency to act as scaffolds. Taken together, these network analyses indicate that one can further predict WD40-associated complexes by using the network topology. WD40-associated complex predictions From a biological perspective, a protein complex represents a group of proteins that interact with each other at the same time and place, forming a multimolecular machine.
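The hub-level score described above, taken here as the mean Pearson correlation between a hub's expression profile and those of its interacting partners, can be computed directly. A minimal sketch with toy profiles and hypothetical protein IDs:

```python
def pearson(x, y):
    """Pearson correlation coefficient between two expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def average_pcc(hub, partners, expr):
    """Mean PCC between a hub's profile and those of its interacting partners."""
    return sum(pearson(expr[hub], expr[p]) for p in partners) / len(partners)

# Toy expression matrix: one profile per (hypothetical) protein
expr = {"HUB": [1.0, 2.0, 3.0], "P1": [2.0, 4.0, 6.0], "P2": [3.0, 2.0, 1.0]}
score = average_pcc("HUB", ["P1", "P2"], expr)
```

Here P1 is perfectly correlated with the hub and P2 perfectly anti-correlated, so the average PCC is zero; real hubs fall somewhere in between, and the intramodular/intermodular distinction compares these averages across hubs.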
From a topological perspective, a protein complex corresponds to a highly connected subgraph, or cluster, with more interactions inside it and fewer with the rest of the network [25]. Cliques are one such kind of highly connected subgraph, in which every pair of nodes is linked by an edge, and clique-based methods are useful for predicting complexes from networks [40,41,42]. The previous sections have confirmed at the systems level that WD40 hubs tend to form complexes (Fig. 2), and the clustering coefficients of WD40 proteins, which measure the tendency of nodes to form dense clusters, are also much higher than those of non-WD40 proteins (see Additional file 1: Table S4). Therefore, a method simply based on finding cliques may be effective for predicting WD40-associated complexes. Using the HC-PPI network, we detected 1674 maximal cliques (Additional file 3: Table S8). The clique sizes range from 3 to 16, and many cliques overlap with each other. We merged the cliques to obtain a series of predicted complex sets, namely M05 to M10, according to different levels of overlap between two cliques (see Methods for the names of the sets). These sets contain from fewer than 100 to more than 1000 predicted complexes (Additional file 1: Table S9). To find out which complex set is best, we compared them with a reference set containing 234 experimentally identified human WD40-associated protein complexes extracted from the CORUM database [43] (see Methods and Additional file 4: Table S10). For these comparisons, we also tried different values of ω [44], which is used to determine whether a predicted complex matches one of the reference complexes (Additional file 1: Table S9 and see Methods for the definition of ω). As shown in Fig. 3, M10 matches the reference set better than the other predicted sets (M05 to M09) under all ω scores.
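Maximal cliques can be enumerated with the classic Bron-Kerbosch recursion; the unpivoted sketch below runs on a toy graph (the paper's actual pipeline, including the M05-M10 merging of overlapping cliques, is defined in its Methods and is not reproduced here):

```python
def bron_kerbosch(r, p, x, adj, out):
    """Basic Bron-Kerbosch: r = growing clique, p = candidates, x = excluded."""
    if not p and not x:
        out.append(frozenset(r))
        return
    for v in list(p):
        bron_kerbosch(r | {v}, p & adj[v], x & adj[v], adj, out)
        p.discard(v)
        x.add(v)

# Toy adjacency sets: two triangles sharing the edge A-C
adj = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B", "D"},
    "D": {"A", "C"},
}
cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
# keep cliques of size >= 3, matching the size range reported above
cliques = [c for c in cliques if len(c) >= 3]
```

For networks of HC-PPI's size, a pivoted variant (e.g., networkx's find_cliques) is the practical choice.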
This result suggests that using the maximal cliques without further merging can effectively predict true WD40-associated complexes, so the following analyses are all based on M10. The number of reference complexes matched by the predicted complex sets at different ω scores. Different lines represent different predicted complex sets derived from different merging parameters. The ω on the X-axis denotes the score that determines whether a predicted complex matches a reference one. The Y-axis gives the number of reference complexes matched by predicted ones at the corresponding ω scores We also tried other well-known methods for comparison, including MCODE [44], ClusterOne [45], and MCL [46]. We found that all three methods output far fewer WD40 protein-associated complexes than the clique-based method (Additional file 1: Table S11), indicating that the clique-based method is much more aggressive. Besides, when comparing the complex sets predicted by these three methods to the reference set, we found that the numbers of matched reference complexes are much smaller than with the clique-based method under almost all ω scores (Additional file 1: Figure S6), suggesting a higher sensitivity of the clique-based method. In addition to the number of matched reference complexes, we further adopted the maximal matching ratio (MMR) to compare these methods [45]. The MMR measures to what extent the predicted complexes overlap with the matched reference complexes. At the setting of ω ≥ 0.2, as recommended by MCODE [44], we found that the clique-based method obtained similar or even better MMR (Additional file 1: Table S12), revealing that the clique-based method can detect more true complexes without sacrificing their quality. Taken together, these observations indicate that, although the clique-based method may have a high false positive rate, it can detect many more true WD40 complexes than the others.
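The ω-based matching can be scripted once an overlap score is fixed. The sketch below assumes the MCODE-style definition ω(A, B) = |A∩B|²/(|A|·|B|); the paper's exact definition is given in its Methods, so treat this formula as an assumption:

```python
def omega(pred, ref):
    """Overlap score between a predicted and a reference complex
    (assumed here to follow the MCODE definition: |I|^2 / (|A|*|B|))."""
    inter = len(pred & ref)
    return inter * inter / (len(pred) * len(ref))

def matched_references(predicted, references, threshold):
    """Count reference complexes matched by at least one predicted complex."""
    return sum(
        any(omega(p, r) >= threshold for p in predicted) for r in references
    )

# Toy sets with hypothetical protein IDs
predicted = [{"A", "B", "C"}]
references = [{"A", "B", "C", "D"}, {"X", "Y", "Z"}]
matched = matched_references(predicted, references, 0.2)
```

Sweeping the threshold over 0.2, 0.3, 0.4, 0.5 reproduces the kind of matched-count curves shown in Fig. 3.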
To evaluate the impact on the prediction stemming from PPIs with different confidence levels, we further performed the clique-based prediction using the ALL-PPI network. It turned out that M10 matched the reference set best, similar to the case with the HC-PPI network (Additional file 1: Figure S7). As expected, the WD40 protein-associated complexes predicted from ALL-PPI are far more numerous than those from HC-PPI (Additional file 1: Table S13), as the incorporation of many interactions with lower confidence forms more cliques. However, the numbers of matched reference complexes based on ALL-PPI and HC-PPI are similar and are near the total number of reference complexes at ω ≥ 0.2 (Additional file 1: Figure S8 and Table S13). This indicates that the clique-based method on the ALL-PPI network outputs too many positive predictions without contributing much to the true positives, suggesting that clique-based complex prediction using HC-PPI may be better than using ALL-PPI. Further evaluation of the final predicted complex set We chose M10 from HC-PPI as the final predicted complex set according to the analysis above, and it matched 202, 190, 158, and 99 known WD40-associated complexes in the reference set at ω scores no less than 0.2, 0.3, 0.4, and 0.5, respectively (see Additional file 1: Table S9). As it is difficult to obtain a suitable negative control data set, it is challenging to evaluate the false positive rate of our prediction directly. However, as protein complexes are groups of proteins that exert functions at the same time and location, it is reasonable to assume that proteins within a true complex are highly co-expressed. Therefore, we further evaluated the final predicted complex set indirectly by calculating a co-expression score for each potential complex (similar to but different from the average PCC for hub proteins, see Methods). By comparison (Fig.
4), we observed that both the predicted WD40-associated complexes and the reference complexes presented significantly higher co-expression scores than the decoy complexes, i.e., randomized protein sets (p < 2.2e-16 for both tests with protein-level or RNA-level expression data, Mann-Whitney U tests). In addition, the fold changes with protein-level expression are both larger than 2, and those with RNA-level expression are both larger than 1.5 (see Additional file 1: Table S14). When the co-expression scores of the predicted complexes were compared to those of the reference complexes, a statistically significant difference was observed with protein-level expression data (p = 1.015e-6, Mann-Whitney U test), but the fold change of medians is only 1.09 (see Additional file 1: Table S14). With RNA-level expression data, no statistically significant difference was observed (p = 0.115, Mann-Whitney U test), and the fold change of medians is only 1.04 (see Additional file 1: Table S14). Distributions of the co-expression scores of predicted WD40-associated complexes, reference complexes, and decoy complexes. The orange solid line, the blue dotted line, and the black dashed line represent the distributions of co-expression scores of predicted WD40 complexes, reference complexes, and decoy complexes, respectively. The co-expression scores are calculated by using both the protein-level expression data (a) and the RNA-level expression data (b) The above results provide several indications. First, the co-expression scores are more distinguishable when using protein-level expression data, consistent with our understanding that protein-level expression data are more suitable for integration into PPI network analyses than the indirect RNA-level expression data.
Second, the co-expression scores are much higher in the reference complex set than in the decoy complex set, indicating that the co-expression values do have the potential to evaluate the predicted complexes. Third and most important, the much smaller differences between the co-expression scores of our predicted complexes and those of the reference complexes indirectly demonstrate the high quality of our predictions. Our complex prediction can provide valuable information for researchers studying WD40 proteins. For example, a predicted complex named "core_209" consists of seven proteins (Fig. 5a, see Additional file 3: Table S8), and among them, TCPA, TCPB, TCPE, TCPH, and TCPQ are subunits of the CCT chaperonin complex (CORUM complex ID: 126) [47]. NEDD1, a WD40 protein, is not included in any complex in the CORUM database, so researchers studying NEDD1 cannot obtain its complex information from CORUM; our predictions provide some. Furthermore, a literature search shows that NEDD1 was reported to localize at the centrosome and recruit the γ-tubulin ring complex [48] through interacting with TBG1 (tubulin gamma-1 chain protein) [49]. Interestingly, one study found that CCT can bind to unfolded γ-tubulin and promote its folding [50]. According to these studies, it is reasonable to propose that "core_209" might be a true complex in which CCT binds to γ-tubulin to promote its folding, and NEDD1 then recruits the folded γ-tubulin ring complex (containing TBG1) to the centrosome. Two examples of potential WD40 protein-associated complexes. The nodes connected by dark grey lines belong to predicted complexes, whereas the nodes connected by light grey lines represent the reference complexes. Nodes in light red are shared by the predicted complex and reference complex.
a the predicted complex, core_209, superimposed with the reference CCT complex; b the predicted complex, core_5, superimposed with the reference 19S proteasome Another example is "core_5" (Fig. 5b), which includes a WD40 protein (PAAF1), a protease (UCHL5), and many members of the 19S regulatory complex (CORUM complex ID: 32, PA700 complex) of the 26S proteasome. The database contains no information about whether this complex can interact with PAAF1 and UCHL5, but the predicted "core_5" suggests this possibility. A literature search shows supporting evidence: the 19S regulatory complex recognizes poly-ubiquitinated proteins, recruits UCHL5 (a deubiquitinase) to remove the ubiquitins, and translocates them to the 20S core particle for degradation [51, 52]; PAAF1 interacts with the 19S regulatory complex and destabilizes the association between the 19S complex and the 20S core [53], serving as a negative regulator of the 26S proteasome. Based on these clues, it is reasonable to propose that both UCHL5 and PAAF1 can bind the 19S regulatory complex to form a larger one. These examples demonstrate that our complex predictions based on the network topology, in combination with literature mining, can provide informative clues for proposing putative functions of WD40 proteins. Network-based approaches have been applied to protein studies in recent years. During the last two decades, various methods and theories have accumulated for biological network analyses concerning the relationships between network features and protein functions [20, 21, 23, 28, 54, 55]. Scardoni et al. discussed several topological centrality properties as well as their biological significance [56]. Highly connected proteins in a yeast interactome tend to be essential [20], and centrally located proteins have been proposed to be more likely essential [21].
These established analysis strategies and the corresponding findings, in combination with the rapidly accumulating PPI data in online databases, make it possible to interrogate the distinct network characteristics of a specified protein set, such as the WD40 protein family. WD40 proteins are abundant in eukaryotes [57], and studies have suggested that the family may have expanded in the early evolutionary stage of eukaryotes through duplication events acting on the whole domain or protein [2]. In prokaryotes, a substantial proportion of WD40 proteins has been speculated to have a late origin through duplication events at the repeat level, although the total number of prokaryotic WD40s is much smaller than that of eukaryotic ones [58]. The reason why this family is prevalent in proteomes may stem from its structural and functional characteristics. According to the crystal structures of certain family members, the WD40 protein family is assumed to participate in protein-protein interactions and complex assembly, but there was no systematic confirmation. In this work, we have performed the first systematic and quantitative network analyses of human WD40 proteins. First, this work has shown that the human WD40 protein family, as a whole, tends to be intramodular hubs and to be located near the global center, leading to a better understanding of their roles in organizing PPI networks. Second, we have provided quantitative measures for each WD40 protein concerning its network properties, such as degree and k-value, which can serve as clues to prioritize candidates for in-depth studies. On the other hand, these quantitative per-protein measures also provide information that could not be obtained from the overall tendency alone. For example, we found many non-hub WD40 proteins with very low connectivity, such as DC121, DC4L1, DEND3, EMAL5, and TBL1Y.
Using only degrees, we cannot distinguish hub proteins located near the global center from those at the periphery, and the k-core decomposition complements this deficiency. The k-core decomposition has demonstrated that WD40 proteins prefer to be located close to the global center of the PPI network, rather than at local centers. The fact that the three WD40 proteins with top k-values (MED16, GBLP, and COR1C) are not the same as the three WD40s with top degrees further shows that the k-core decomposition has indeed added information from another dimension. In addition to static topological properties, we also examined a dynamic feature describing the average PCC between a hub and its interacting partners, which revealed that WD40 hubs tend to be intramodular and quantitatively confirmed the previous inference that most WD40 proteins, if not all, participate in various protein complexes. Inspired by this, we further predicted WD40-associated complexes from the topology of the human PPI network by using a simple clique-based method and three other well-known predictors. The comparison revealed that, although the clique-based method may have a higher false positive rate, it gives out many more putative complexes with relatively high co-expression scores, which can serve as indicators of low false positive rates. The predicted novel complexes can also provide valuable clues for inferring their detailed functions. In future work, one could seek to construct a negative set to evaluate the false positive rate directly. We utilized two human PPI networks with different confidence levels. In all cases, the inferences drawn from these two networks are consistent, demonstrating that the overall conclusions in this work are sufficiently robust.
In some cases, we can extrapolate the impacts stemming from different confidence levels: the tendency of WD40 proteins to be hubs can appear higher when incorporating PPI data with low confidence, but many false positive complex predictions could be introduced. This also suggests that a more sophisticated clique-based method should be developed in the future, e.g., by integrating the confidence score of each PPI in the network and by training proper parameters for selecting informative interactions automatically. In summary, we have conducted the first systematic and quantitative network analyses of human WD40 proteins. By comparing them with non-WD40 proteins on several static topological properties and on a dynamic feature that integrates co-expression data, our work demonstrates that the WD40 family tends to be intramodular hubs and to be located near the global center of the whole network, providing clues about their roles in organizing the PPI network. In addition, these findings quantitatively confirm the previous structure-based inference that the WD40 protein family may often act as scaffolds to assemble complexes. Finally, we have effectively predicted WD40 protein-associated complexes by using a clique-based method. The quantitative features analyzed in this work and the predicted complexes can serve as clues for inferring putative functions and prioritizing candidates for further studies. Protein-protein interaction data set and network construction The human PPI data set was downloaded from the HIPPIE database [26, 27] (v2.0, release 2016-05-24), which presents one of the most comprehensive human PPI data sets. It has integrated experimentally detected PPIs extracted from MINT [16], MIPS [17], HPRD [18], IntAct [59], BioGRID [60], DIP [61], and BIND [62], and has also implemented a confidence scoring system weighting the amount and quality of the evidence for each interaction. The larger the score (ranging from 0 to 1), the higher the confidence.
After downloading the data set (273,927 interactions at the time of access), we further cleaned it by removing PPIs lacking a UniProt ID [34] or describing self-interactions, and by merging repetitive interactions. This process of data cleaning resulted in the ALL-PPI data set. Based on it, we curated the HC-PPI (high confidence PPI) data set by keeping only the PPIs whose confidence scores are at least 0.72, the third quartile of all the scores, a cutoff also suggested by the authors of HIPPIE for filtering out potential false positive interactions. In practice, as the confidence scores take 65 distinct values and more than 25,000 interactions have the score 0.72, the percentage of interactions in HC-PPI relative to ALL-PPI was greater than 25%. A list of 262 human WD40 proteins was retrieved from previous work [2] and was adopted to label the WD40 proteins in ALL-PPI and HC-PPI. All other proteins were treated as non-WD40s. A PPI network is defined as a graph, where nodes and edges represent proteins and their interactions, respectively. In the network, there may be isolated components without any edge connecting them, and the largest one is referred to as the main component. We adopted Cytoscape [63] to construct the PPI networks for ALL-PPI and HC-PPI, and the topological parameters were calculated with NetworkAnalyzer [63]. Comparison of centralities and other properties between WD40 and non-WD40 proteins Centralities are basic network properties characterizing each node or edge with respect to its position within the network. The comparison of centrality between WD40 and non-WD40 proteins was mainly conducted using the degree measure, which is the most intuitive. Other measures, including betweenness, closeness, and clustering coefficient, were also examined. The degree of a node in a network is the number of its direct links with other nodes.
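The cleaning and filtering steps can be sketched as follows. Note that the deduplication rule here (keeping the maximum score per undirected pair) is an illustrative assumption; HIPPIE's own scoring and merging procedure differs:

```python
def clean_and_filter(raw_ppis, cutoff):
    """Drop self-interactions, deduplicate undirected pairs (keeping the
    best score per pair, an illustrative rule), then split into ALL and
    high-confidence (score >= cutoff) sets."""
    best = {}
    for a, b, score in raw_ppis:
        if a == b:
            continue                      # self-interaction
        key = tuple(sorted((a, b)))       # undirected pair
        best[key] = max(best.get(key, 0.0), score)
    all_ppi = best
    hc_ppi = {k: s for k, s in best.items() if s >= cutoff}
    return all_ppi, hc_ppi

# Toy (protein_a, protein_b, confidence) records with hypothetical IDs
raw = [("A", "B", 0.9), ("B", "A", 0.5), ("C", "C", 0.99), ("A", "C", 0.3)]
all_ppi, hc_ppi = clean_and_filter(raw, 0.72)
```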
In a PPI network, a highly connected protein (say, with degree greater than 5) is defined as a 'hub', as described in previous publications [23, 24]. We mainly adopted this cutoff to define hubs, and cutoffs of 10 and 15 were also used for extended comparisons. The ratios of hubs to non-hubs were calculated for WD40 and non-WD40 proteins, respectively. The odds ratio (OR) was defined by dividing this ratio in WD40 proteins by that in non-WD40s. The χ2 test was adopted to measure the statistical significance of the odds ratio differing from 1. Betweenness centrality of a node reflects the amount of control that this node exerts over the interactions of other nodes in the network [64]. The betweenness of node n is calculated as follows: $$ C_b(n)=\sum_{s\ne n\ne t}\frac{\sigma_{st}(n)}{\sigma_{st}}, $$ where s and t are nodes in the network different from node n, σ_st denotes the number of shortest paths from s to t, and σ_st(n) is the number of shortest paths from s to t that pass through node n. In NetworkAnalyzer, the betweenness value of each node n is further normalized by dividing by the number of node pairs excluding node n. Closeness centrality measures how fast information spreads from a given node to the other reachable nodes in the network. The closeness centrality of node n is defined as the reciprocal of the average shortest path length [65]: $$ C_c(n)=1/\mathrm{avg}\left(L\left(m,n\right)\right), $$ where L(m, n) is the length of the shortest path between nodes n and m, and m ranges over all nodes reachable from node n. Stress centrality of a node n is the number of shortest paths passing through node n; a high stress centrality means that the node is traversed by many shortest paths [66].
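The odds-ratio and χ2 calculation for the 2x2 hub-count table can be sketched in a few lines of Python. The counts below are invented for illustration, and the p-value uses the closed-form survival function of the χ2 distribution with one degree of freedom (erfc(√(x/2))), so no statistics library is needed.

```python
import math

def hub_enrichment(wd40_hubs, wd40_nonhubs, other_hubs, other_nonhubs):
    """Odds ratio and Pearson chi-squared test (1 d.f.) for a 2x2 hub-count table."""
    odds = (wd40_hubs / wd40_nonhubs) / (other_hubs / other_nonhubs)
    n = wd40_hubs + wd40_nonhubs + other_hubs + other_nonhubs
    # Pearson chi-squared statistic for a 2x2 table (no continuity correction).
    num = n * (wd40_hubs * other_nonhubs - wd40_nonhubs * other_hubs) ** 2
    den = ((wd40_hubs + wd40_nonhubs) * (other_hubs + other_nonhubs)
           * (wd40_hubs + other_hubs) * (wd40_nonhubs + other_nonhubs))
    chi2 = num / den
    p = math.erfc(math.sqrt(chi2 / 2.0))  # chi2 survival function with 1 d.f.
    return odds, chi2, p

# Invented counts: 150 hubs among 262 WD40s vs. 5000 hubs among 15000 others.
odds, chi2, p = hub_enrichment(150, 112, 5000, 10000)
```

With these made-up counts the odds ratio is well above 1 and the enrichment is highly significant, mirroring the kind of comparison described in the text.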
In PPI networks, the clustering coefficient of a node n is defined as follows: $$ C_n=\frac{2e_n}{k_n\left(k_n-1\right)}, $$ where k_n denotes the number of neighbors of node n and e_n is the number of connected pairs among those neighbors. This property measures the tendency of node n and its neighbors to form a cluster [28]. All the network properties described above were calculated for each protein in both the HC-PPI and ALL-PPI networks with NetworkAnalyzer [63]. Direct comparisons between WD40 and non-WD40 proteins were performed using the one-tailed Mann-Whitney U test. Fold changes, measuring the ratio of the median degree of WD40 proteins to that of non-WD40s, were also calculated. Expression patterns of the top high- and low-degree WD40 proteins were retrieved directly from a previous study [2]. They were based on the RNA-seq data set of the Human Protein Atlas project [35], which is also used in later sections of this study. Proteins expressed in all 27 tissues with FPKM > 10 are defined as "High in all tissues", and those expressed in most (but not all) tissues with FPKM > 10 are defined as "High in many tissues". Proteins expressed in one tissue at an FPKM 5 or more times greater than in all other tissues are defined as "Tissue-specific", and those showing an expression preference for specific tissues but with fold changes less than 5 are termed "Tissue-preferential".
k-core decomposition of PPI network
The k-core decomposition [67] of a PPI network was carried out by iteratively removing all nodes with degree less than k until all remaining nodes have degree at least k. The remaining part is accordingly named the k-core subnetwork. As k increases stepwise from 1, the locations of the remaining nodes move from the periphery toward the center of the whole network (Additional file 1: Figure S9a).
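The iterative peeling just described can be sketched with plain Python dictionaries; the toy adjacency map below is our own construction, not taken from the PPI data.

```python
def k_core(adj, k):
    """Node set of the k-core: iteratively peel nodes whose degree drops below k."""
    nodes = set(adj)
    changed = True
    while changed:
        changed = False
        for n in list(nodes):
            # Degree counted only among nodes that are still in the network.
            if sum(1 for m in adj[n] if m in nodes) < k:
                nodes.discard(n)
                changed = True
    return nodes

# Toy graph: a triangle A-B-C with a pendant node D attached to A.
adj = {"A": {"B", "C", "D"}, "B": {"A", "C"}, "C": {"A", "B"}, "D": {"A"}}
```

For k = 2 the pendant node D is peeled off while the triangle survives; for k = 3 nothing survives, since removing D leaves every triangle node with degree 2.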
This decomposition process splits the network into layers from outside to inside, where layer k contains the proteins in the k-core subnetwork excluding those in the (k + 1)-core subnetwork. Each protein in layer k can then be assigned the value k (i.e., the k-value) to describe its layer location. The larger the k-value, the closer the node is to the center of the whole network (i.e., the global center). Nodes with high degrees but low k-values are hubs located at the periphery, and are termed local centers (Additional file 1: Figure S9b) [21]. The k-core decomposition described above was applied to the HC-PPI and ALL-PPI networks, respectively. For comparison, the median k-values of WD40 and non-WD40 proteins in each network, and the percentage of WD40s in each k-core subnetwork, were calculated. A fold change was measured as the ratio of the median k-value of WD40 proteins to that of non-WD40s. The list of human essential genes was retrieved from a previous study [36]; it contains 1299 genes integrated from four distinct sources. Evolutionary conservation analysis of WD40 proteins near the global center was performed by checking their orthologs in other model eukaryotes in the Inparanoid database [68].
Analysis of the intramodular preference for WD40 hubs
Using gene expression data from a series of different tissues, one can calculate the Pearson correlation coefficient (PCC) to quantify the extent to which a pair of interacting proteins is co-expressed. Here, the expression data of a gene were represented as a vector with one component per tissue.
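The PCC between two such expression vectors can be computed directly; the FPKM-like vectors over four hypothetical tissues below are invented for illustration.

```python
import math

def pcc(x, y):
    """Pearson correlation coefficient of two equal-length expression vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)

# Invented FPKM-like vectors over four hypothetical tissues.
gene1 = [1.0, 2.0, 3.0, 4.0]
gene2 = [2.0, 4.0, 6.0, 8.0]   # perfectly co-expressed with gene1
gene3 = [4.0, 3.0, 2.0, 1.0]   # perfectly anti-correlated with gene1
```

A PCC near +1 indicates constitutive co-expression across tissues, while values near 0 or below suggest context-specific interactions.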
According to previous studies of the yeast and human interactomes [23, 24], the average of all PCCs between a hub and its interacting partners can be used to determine whether the interactions of this hub are context-specific (low average PCC) or constitutive (high average PCC); the hub is referred to as intermodular or intramodular, accordingly. The distribution of the average PCCs of WD40 hubs was compared with that of non-WD40 hubs. As a control, we also generated a random distribution of the average PCCs of all hub proteins: in brief, the associations between expression vectors and proteins were shuffled, and the average PCC of each hub protein was then re-calculated to generate this random distribution. The same analyses were carried out on the HC-PPI and ALL-PPI networks, respectively, and for each, the RNA-level and protein-level expression data sets were considered independently. For the RNA-level expression, we used the RNA-seq data set of the Human Protein Atlas project (ArrayExpress ID: E-MTAB-1733) [35]. This data set contains RNA expression levels, as FPKM values, for 20,050 protein-coding genes in 27 different tissues from 95 samples. In each tissue, the FPKM values of a gene from different samples were averaged to represent its expression level, and its expression values across tissues constitute the expression vector. The expression vectors of a protein pair were used for the PCC calculation. The UniProt ID mapping tool [34] and bioDBnet [69] were adopted to map the IDs in the RNA-seq data set to the protein IDs in the PPI networks. After ID mapping and removal of ambiguities, 15,358 and 10,751 proteins in the ALL-PPI and HC-PPI networks were assigned expression data, respectively. For the protein-level expression, we utilized data from the Human Proteome Map [70], which contains expression information for more than 30,000 proteins in 30 human tissues.
After ID mapping and removal of ambiguities, 13,764 and 10,003 proteins in the ALL-PPI and HC-PPI networks were assigned expression data, respectively.
Complex predictions
In the clique-based method, we took three simple steps to mine WD40-associated complexes in the PPI network. First, we extracted a subnetwork containing only the WD40 proteins and their directly connected (first-order) neighbors. Second, all maximal cliques were identified with the algorithm developed by Bron and Kerbosch [71]. Third, maximal cliques of size greater than 2 that contain at least one human WD40 protein were chosen as potential WD40-associated complexes. Since some of the cliques generated above may overlap with others, two cliques can be merged according to a specified merging parameter that measures the proportion of shared proteins relative to the size of the smaller clique. To determine to what extent overlapping cliques should be merged, we tried several merging parameters (50%, 60%, 70%, 80%, 90%, and 100%), resulting in a series of predicted complex sets (M05, M06, M07, M08, M09, and M10, respectively). For example, the predicted complex set M05 is obtained by iteratively merging cliques that share 50% of their nodes, and M10 means no merging at all. The reference complex data set contains all 234 experimentally identified human WD40-associated protein complexes extracted from the CORUM database [43], involving 90 human WD40 proteins. The overlap score ω [44], used to determine whether a predicted complex "matches" a complex in the reference set, was defined as: $$ \omega =\frac{{\left|A\cap B\right|}^2}{\left|A\right|\cdot \left|B\right|}, $$ where |A| and |B| are the numbers of proteins in complexes A and B, respectively.
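A compact sketch of these steps, using a plain Bron-Kerbosch recursion (without pivoting) and the overlap score ω defined above, is given below on a toy graph; the protein names, the minimum-size parameter, and the WD40 label set are all illustrative assumptions.

```python
def bron_kerbosch(r, p, x, adj, out):
    """Classic Bron-Kerbosch recursion (no pivoting) listing all maximal cliques."""
    if not p and not x:
        out.append(r)
        return
    for v in list(p):
        bron_kerbosch(r | {v}, p & adj[v], x & adj[v], adj, out)
        p = p - {v}   # v has been processed ...
        x = x | {v}   # ... so exclude it from further cliques at this level

def wd40_clique_candidates(adj, wd40s, min_size=3):
    """Maximal cliques of size >= min_size containing at least one WD40 protein."""
    cliques = []
    bron_kerbosch(set(), set(adj), set(), adj, cliques)
    return [c for c in cliques if len(c) >= min_size and c & wd40s]

def overlap_score(a, b):
    """omega = |A ∩ B|**2 / (|A| * |B|)."""
    return len(a & b) ** 2 / (len(a) * len(b))

# Toy graph: a clique {W1, P1, P2} plus an extra edge P2-P3; W1 is the WD40.
adj = {"W1": {"P1", "P2"}, "P1": {"W1", "P2"}, "P2": {"W1", "P1", "P3"}, "P3": {"P2"}}
candidates = wd40_clique_candidates(adj, wd40s={"W1"})
```

In this toy network only the triangle containing W1 qualifies as a candidate complex; the edge P2-P3 is a maximal clique but fails the size filter.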
We evaluated a comprehensive series of ω thresholds, from 0.0 to 1.0, to assess the complex prediction results obtained from the different merging parameters mentioned above; this evaluation helps to choose a proper merging parameter. For comparison, other methods, including MCODE [44], ClusterOne [45], and MCL [46], were also applied. These methods took two steps to predict WD40 protein-associated complexes from the main component of the PPI network: first, they detected all "modules" in the main component; second, "modules" of size at least 3 containing at least one WD40 protein were kept as potential complexes. Following the recommendations of the original literature, default settings were chosen for both ClusterOne and MCODE, whereas three different values (1.5, 2.0, and 4.0) were used to control the granularity of MCL. To compare the different complex prediction methods, we calculated the number of reference complexes matched by the predicted complexes of each method, and further utilized the maximal matching ratio (MMR) [45], a well-known index that evaluates the overall level of overlap between the matched reference complexes and the predicted complexes that match them.
Co-expression scores of predicted complex set, randomized protein set, and reference set
The co-expression score of a complex (or a protein set) was calculated in two steps: first, we calculated the PCCs between every two proteins within the complex; second, the mean of these PCCs was taken as the co-expression score of the complex. The randomized data set used for comparison was generated by random sampling from the HC-PPI network. It contained the same number of "decoy complexes" as the predicted set, with the same numbers of member proteins, but the members were randomly chosen from the main component of the HC-PPI network.
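The two-step co-expression score and the size-matched decoy sampling can be sketched as follows; the expression vectors, protein names, and random seed are invented for illustration.

```python
import math
import random

def pcc(x, y):
    """Pearson correlation coefficient of two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

def coexpression_score(members, expr):
    """Step 1: PCC for every within-complex pair. Step 2: mean of those PCCs."""
    ms = sorted(m for m in members if m in expr)
    pairs = [(a, b) for i, a in enumerate(ms) for b in ms[i + 1:]]
    return sum(pcc(expr[a], expr[b]) for a, b in pairs) / len(pairs)

def decoy_complexes(sizes, proteins, rng):
    """Size-matched 'decoy complexes' sampled at random from the network nodes."""
    return [rng.sample(proteins, k) for k in sizes]

# Invented expression vectors over three hypothetical tissues.
expr = {"A": [1.0, 2.0, 3.0], "B": [2.0, 4.0, 6.0],
        "C": [3.0, 2.0, 1.0], "D": [1.0, 3.0, 2.0]}
score = coexpression_score({"A", "B", "C"}, expr)          # mean over 3 pairs
decoys = decoy_complexes([3, 3], list(expr), random.Random(1))
```

Comparing the score distribution of predicted complexes against such size-matched decoys indicates whether the predictions are more tightly co-expressed than chance.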
The expression data sets used here were the same as those described for the calculation of average PCCs of hub proteins. The co-expression scores of the predicted complexes, the reference complexes, and the "decoy complexes" were all calculated independently based on the protein-level and the RNA-level expression data.
ALL-PPI: all PPI data obtained from the HIPPIE database after data cleaning
HC-PPI: high-confidence PPI data from ALL-PPI with confidence scores of at least the third quartile (0.72)
HIPPIE: Human Integrated Protein-Protein Interaction rEference
PCC: Pearson correlation coefficient
Stirnimann CU, Petsalaki E, Russell RB, Muller CW. WD40 proteins propel cellular networks. Trends Biochem Sci. 2010;35(10):565–74. Zou XD, Hu XJ, Ma J, Li T, Ye ZQ, Wu YD. Genome-wide analysis of WD40 protein family in human. Sci Rep. 2016;6:39262. Xu C, Min J. Structure and function of WD40 domain proteins. Protein Cell. 2011;2(3):202–14. Zhang C, Zhang F. The multifunctions of WD40 proteins in genome integrity and cell cycle progression. J Genomics. 2015;3:40–50. Gaudet R, Bohm A, Sigler PB. Crystal structure at 2.4 angstroms resolution of the complex of transducin betagamma and its regulator, phosducin. Cell. 1996;87(3):577–88. Cardozo T, Pagano M. The SCF ubiquitin ligase: insights into a molecular machine. Nat Rev Mol Cell Biol. 2004;5(9):739–51. Wu XH, Wang Y, Zhuo Z, Jiang F, Wu YD. Identifying the hotspots on the top faces of WD40-repeat proteins from their primary sequences by beta-bulges and DHSW tetrads. PLoS One. 2012;7(8):e43005. Jennings BH, Pickles LM, Wainwright SM, Roe SM, Pearl LH, Ish-Horowicz D. Molecular recognition of transcriptional repressor motifs by the WD domain of the Groucho/TLE corepressor. Mol Cell. 2006;22(5):645–55. Johnston CA, Kimple AJ, Giguere PM, Siderovski DP. Structure of the parathyroid hormone receptor C terminus bound to the G-protein dimer Gbeta1gamma2. Structure. 2008;16(7):1086–94. Skaar JR, Pagan JK, Pagano M.
SCF ubiquitin ligase-targeted therapies. Nat Rev Drug Discov. 2014;13(12):889–903. Hao B, Oehlmann S, Sowa ME, Harper JW, Pavletich NP. Structure of a Fbw7-Skp1-cyclin E complex: multisite-phosphorylated substrate recognition by SCF ubiquitin ligases. Mol Cell. 2007;26(1):131–43. Oliver AW, Swift S, Lord CJ, Ashworth A, Pearl LH. Structural basis for recruitment of BRCA2 by PALB2. EMBO Rep. 2009;10(9):990–6. Yu H, Braun P, Yildirim MA, Lemmens I, Venkatesan K, Sahalie J, et al. High-quality binary protein interaction map of the yeast interactome network. Science. 2008;322(5898):104–10. Collins SR, Kemmeren P, Zhao XC, Greenblatt JF, Spencer F, Holstege FC, et al. Toward a comprehensive atlas of the physical interactome of Saccharomyces cerevisiae. Mol Cell Proteomics. 2007;6(3):439–50. Huttlin EL, Ting L, Bruckner RJ, Gebreab F, Gygi MP, Szpyt J, et al. The BioPlex network: a systematic exploration of the human Interactome. Cell. 2015;162(2):425–40. Zanzoni A, Montecchi-Palazzi L, Quondam M, Ausiello G, Helmer-Citterich M, Cesareni G. MINT: a molecular INTeraction database. FEBS Lett. 2002;513(1):135–40. Pagel P, Kovac S, Oesterheld M, Brauner B, Dunger-Kaltenbach I, Frishman G, et al. The MIPS mammalian protein-protein interaction database. Bioinformatics. 2005;21(6):832–4. Keshava Prasad TS, Goel R, Kandasamy K, Keerthikumar S, Kumar S, Mathivanan S, et al. Human protein reference database--2009 update. Nucleic Acids Res. 2009;37(Database):D767–72. Szklarczyk D, Morris JH, Cook H, Kuhn M, Wyder S, Simonovic M, et al. The STRING database in 2017: quality-controlled protein-protein association networks, made broadly accessible. Nucleic Acids Res. 2017;45(D1):D362–D8. Jeong H, Mason SP, Barabasi AL, Oltvai ZN. Lethality and centrality in protein networks. Nature. 2001;411(6833):41–2. Wuchty S, Almaas E. Peeling the yeast protein network. Proteomics. 2005;5(2):444–9. Ideker T, Sharan R. Protein networks in disease. Genome Res. 2008;18(4):644–52. 
Han JD, Bertin N, Hao T, Goldberg DS, Berriz GF, Zhang LV, et al. Evidence for dynamically organized modularity in the yeast protein-protein interaction network. Nature. 2004;430(6995):88–93. Taylor IW, Linding R, Warde-Farley D, Liu Y, Pesquita C, Faria D, et al. Dynamic modularity in protein interaction networks predicts breast cancer outcome. Nat Biotechnol. 2009;27(2):199–204. Spirin V, Mirny LA. Protein complexes and functional modules in molecular networks. Proc Natl Acad Sci U S A. 2003;100(21):12123–8. Schaefer MH, Fontaine JF, Vinayagam A, Porras P, Wanker EE, Andrade-Navarro MA. HIPPIE: integrating protein interaction networks with experiment based quality scores. PLoS One. 2012;7(2):e31826. Alanis-Lobato G, Andrade-Navarro MA, Schaefer MH. HIPPIE v2.0: enhancing meaningfulness and reliability of protein-protein interaction networks. Nucleic Acids Res. 2017;45(Database issue):D408–D14. Barabasi AL, Oltvai ZN. Network biology: understanding the cell's functional organization. Nat Rev Genet. 2004;5(2):101–13. Neer EJ, Schmidt CJ, Nambudripad R, Smith TF. The ancient regulatory-protein family of WD-repeat proteins. Nature. 1994;371(6495):297–300. Busino L, Donzelli M, Chiesa M, Guardavaccaro D, Ganoth D, Dorrello NV, et al. Degradation of Cdc25A by beta-TrCP during S phase and in response to DNA damage. Nature. 2003;426(6962):87–91. Watanabe N, Arai H, Nishihara Y, Taniguchi M, Watanabe N, Hunter T, et al. M-phase kinases induce phospho-dependent ubiquitination of somatic Wee1 by SCFbeta-TrCP. Proc Natl Acad Sci U S A. 2004;101(13):4419–24. He YJ, McCall CM, Hu J, Zeng Y, Xiong Y. DDB1 functions as a linker to recruit receptor WD40 proteins to CUL4-ROC1 ubiquitin ligases. Genes Dev. 2006;20(21):2949–54. Cang Y, Zhang J, Nicholas SA, Bastien J, Li B, Zhou P, et al. Deletion of DDB1 in mouse brain and lens leads to p53-dependent elimination of proliferating cells. Cell. 2006;127(5):929–40. The UniProt Consortium. UniProt: the universal protein knowledgebase.
Nucleic Acids Res. 2017;45(D1):D158–D69. Fagerberg L, Hallstrom BM, Oksvold P, Kampf C, Djureinovic D, Odeberg J, et al. Analysis of the human tissue-specific expression by genome-wide integration of transcriptomics and antibody-based proteomics. Mol Cell Proteomics. 2014;13(2):397–406. Zhang W, Landback P, Gschwend AR, Shen B, Long M. New genes drive the evolution of gene interaction networks in the human and mouse genomes. Genome Biol. 2015;16:202. Hemsley PA, Hurst CH, Kaliyadasa E, Lamb R, Knight MR, De Cothi EA, et al. The Arabidopsis mediator complex subunits MED16, MED14, and MED2 regulate mediator and RNA polymerase II recruitment to CBF-responsive cold-regulated genes. Plant Cell. 2014;26(1):465–84. Larsson M, Uvell H, Sandstrom J, Ryden P, Selth LA, Bjorklund S. Functional studies of the yeast med5, med15 and med16 mediator tail subunits. PLoS One. 2013;8(8):e73137. Roadcap DW, Clemen CS, Bear JE. The role of mammalian coronins in development and disease. Subcell Biochem. 2008;48:124–35. Liu G, Wong L, Chua HN. Complex discovery from weighted PPI networks. Bioinformatics. 2009;25(15):1891–7. Adamcsek B, Palla G, Farkas IJ, Derenyi I, Vicsek T. CFinder: locating cliques and overlapping modules in biological networks. Bioinformatics. 2006;22(8):1021–3. Li XL, Tan SH, Foo CS, Ng SK. Interaction graph mining for protein complexes using local clique merging. Genome Inform. 2005;16(2):260–9. Ruepp A, Brauner B, Dunger-Kaltenbach I, Frishman G, Montrone C, Stransky M, et al. CORUM: the comprehensive resource of mammalian protein complexes. Nucleic Acids Res. 2008;36(Database issue):D646–50. Bader GD, Hogue CW. An automated method for finding molecular complexes in large protein interaction networks. BMC Bioinformatics. 2003;4:2. Nepusz T, Yu H, Paccanaro A. Detecting overlapping protein complexes in protein-protein interaction networks. Nat Methods. 2012;9(5):471–2. Enright AJ, Van Dongen S, Ouzounis CA. 
An efficient algorithm for large-scale detection of protein families. Nucleic Acids Res. 2002;30(7):1575–84. Liou AK, Willison KR. Elucidation of the subunit orientation in CCT (chaperonin containing TCP1) from the subunit composition of CCT micro-complexes. EMBO J. 1997;16(14):4311–6. Haren L, Remy MH, Bazin I, Callebaut I, Wright M, Merdes A. NEDD1-dependent recruitment of the gamma-tubulin ring complex to the centrosome is necessary for centriole duplication and spindle assembly. J Cell Biol. 2006;172(4):505–15. Hutchins JR, Toyoda Y, Hegemann B, Poser I, Heriche JK, Sykora MM, et al. Systematic analysis of human protein complexes identifies chromosome segregation proteins. Science. 2010;328(5978):593–9. Melki R, Vainberg IE, Chow RL, Cowan NJ. Chaperonin-mediated folding of vertebrate actin-related protein and gamma-tubulin. J Cell Biol. 1993;122(6):1301–10. Bedford L, Paine S, Sheppard PW, Mayer RJ, Roelofs J. Assembly, structure, and function of the 26S proteasome. Trends Cell Biol. 2010;20(7):391–401. Yao T, Song L, Xu W, DeMartino GN, Florens L, Swanson SK, et al. Proteasome recruitment and activation of the Uch37 deubiquitinating enzyme by Adrm1. Nat Cell Biol. 2006;8(9):994–1002. Park Y, Hwang YP, Lee JS, Seo SH, Yoon SK, Yoon JB. Proteasomal ATPase-associated factor 1 negatively regulates proteasome activity by interacting with proteasomal ATPases. Mol Cell Biol. 2005;25(9):3842–53. Joy MP, Brock A, Ingber DE, Huang S. High-betweenness proteins in the yeast protein interaction network. J Biomed Biotechnol. 2005;2005(2):96–103. Yook SH, Oltvai ZN, Barabasi AL. Functional and topological characterization of protein interaction networks. Proteomics. 2004;4(4):928–42. Scardoni G, Laudanna C. Centralities based analysis of complex networks. New Frontiers in graph theory: InTech; 2012. Wang Y, Hu XJ, Zou XD, Wu XH, Ye ZQ, Wu YD. WDSPdb: a database for WD40-repeat proteins. Nucleic Acids Res. 2015;43(Database issue):D339–44. 
Hu XJ, Li T, Wang Y, Xiong Y, Wu XH, Zhang DL, et al. Prokaryotic and highly-repetitive WD40 proteins: a systematic study. Sci Rep. 2017;7(1):10585. Kerrien S, Aranda B, Breuza L, Bridge A, Broackes-Carter F, Chen C, et al. The IntAct molecular interaction database in 2012. Nucleic Acids Res. 2012;40(Database issue):D841–6. Stark C, Breitkreutz BJ, Reguly T, Boucher L, Breitkreutz A, Tyers M. BioGRID: a general repository for interaction datasets. Nucleic Acids Res. 2006;34(Database issue):D535–9. Salwinski L, Miller CS, Smith AJ, Pettit FK, Bowie JU, Eisenberg D. The database of interacting proteins: 2004 update. Nucleic Acids Res. 2004;32(Database issue):D449–51. Isserlin R, El-Badrawi RA, Bader GD. The Biomolecular Interaction Network Database in PSI-MI 2.5. Database (Oxford). 2011;2011:baq037. Shannon P, Markiel A, Ozier O, Baliga NS, Wang JT, Ramage D, et al. Cytoscape: a software environment for integrated models of biomolecular interaction networks. Genome Res. 2003;13(11):2498–504. Yoon J, Blumer A, Lee K. An algorithm for modularity analysis of directed and weighted biological networks based on edge-betweenness centrality. Bioinformatics. 2006;22(24):3106–8. Newman ME. A measure of betweenness centrality based on random walks. Soc Networks. 2005;27(1):39–54. Brandes U. A faster algorithm for betweenness centrality. J Math Sociol. 2001;25(2):163–77. Alvarez-Hamelin JI, Dall'Asta L, Barrat A, Vespignani A. K-core decomposition: a tool for the visualization of large scale networks. arXiv preprint cs/0504107. 2005. Sonnhammer EL, Ostlund G. InParanoid 8: orthology analysis between 273 proteomes, mostly eukaryotic. Nucleic Acids Res. 2015;43(Database issue):D234–9. Mudunuri U, Che A, Yi M, Stephens RM. bioDBnet: the biological database network. Bioinformatics. 2009;25(4):555–6. Kim MS, Pinto SM, Getnet D, Nirujogi RS, Manda SS, Chaerkady R, et al. A draft map of the human proteome. Nature. 2014;509(7502):575–81. Bron C, Kerbosch J. 
Algorithm 457: finding all cliques of an undirected graph. Commun ACM. 1973;16(9):575–7. The authors would like to thank Dr. Fan Jiang, Dr. Olaf Wiest, and Dr. Yang Wang for their valuable suggestions and discussions, and they would also like to thank the reviewers for the constructive suggestions. ZQY was supported by the National Natural Science Foundation of China (31471243) and the Shenzhen Basic Research Program (JCYJ20150529095420031). XDZ was partly supported by the Shenzhen Basic Research Program (JCYJ20160428154108239). YDW was supported by the Guangdong Government Program (Leading Talents Introduction Special Funds) and the Shenzhen Basic Research Program (JCYJ20170412150507046). The publication charge of this article was covered by the National Natural Science Foundation of China (31471243). The data sets in this study are available as Additional files. This article has been published as part of BMC Systems Biology volume 12 supplement 4, 2018: Selected papers from the 11th international conference on systems biology (ISB 2017). The full contents of the supplement are available online at https://bmcsystbiol.biomedcentral.com/articles/supplements/volume-12-supplement-4. Lab of Computational Chemistry and Drug Design, Laboratory of Chemical Genomics, Peking University Shenzhen Graduate School, Shenzhen, 518055, People's Republic of China Xu-Dong Zou, Ke An, Yun-Dong Wu & Zhi-Qiang Ye College of Chemistry, Peking University, Beijing, 100871, People's Republic of China Yun-Dong Wu Xu-Dong Zou Ke An Zhi-Qiang Ye XDZ performed the centrality analysis, network decomposition, and expression coefficient analysis, drafted and revised the manuscript. KA participated in complex prediction and manuscript writing. YDW conceived and supervised this study. ZQY conceived and supervised this study, guided the statistical tests, drafted and revised the manuscript. All the authors have read and approved the final manuscript. Correspondence to Yun-Dong Wu or Zhi-Qiang Ye. Figure S1.
Workflow of this study.
Figure S2. Overview of the ALL-PPI network.
Figure S3. Degree distributions of nodes in two networks.
Figure S4. Percentage of WD40 proteins in each k-core subnetwork during the decomposition of ALL-PPI network.
Figure S5. Distributions of average PCCs of WD40 hubs, non-WD40 hubs, and randomized data in ALL-PPI network.
Figure S6. The number of reference complexes matched with predicted complexes obtained by different methods.
Figure S7. The number of reference complexes matched with predicted complexes obtained from ALL-PPI network under different ω.
Figure S8. Number of complexes in reference set matched with predicted complexes obtained from different PPI networks.
Figure S9. k-core decomposition and localization of hubs in PPI network.
Table S2. Counts of hubs in WD40 and non-WD40 proteins in ALL-PPI network.
Table S3. Counts of hubs in WD40 and non-WD40 under different definitions.
Table S4. Comparisons of centralities between WD40 and non-WD40 in two networks.
Table S5. WD40 proteins in different layers obtained by k-core decomposition of ALL-PPI network.
Table S6. Orthologs of MED16, GBLP, and CORO1C in model organisms.
Table S7. Medians of average PCCs of WD40 hubs and non-WD40 hubs in two networks.
Table S9. Statistics of complex predictions based on HC-PPI under different parameter settings.
Table S11. Number of predicted complexes under different methods, and the matched numbers of reference complexes under different ω.
Table S12. Comparisons of MMR for different prediction methods with ω ≥ 0.2.
Table S13. Statistics of complex prediction obtained from ALL-PPI under different parameter settings.
Table S14. Medians of co-expression scores for different complex sets under different expression data sets. (PDF 1014 kb)
Table S1. ALL-PPI (including HC-PPI) interactions annotated with confidence scores and PCCs. (XLSX 9754 kb)
Table S8. Maximal cliques derived from HC-PPI network. (XLSX 103 kb)
Table S10.
Reference human WD40 complexes derived from the CORUM database. (XLSX 27 kb)
Zou, XD., An, K., Wu, YD. et al. PPI network analyses of human WD40 protein family systematically reveal their tendency to assemble complexes and facilitate the complex predictions. BMC Syst Biol 12, 41 (2018). https://doi.org/10.1186/s12918-018-0567-9
Keywords: WD40 protein; human WD protein; WD40 family; complex prediction; clique-based method
(Contributor), Bortolotto, V. (Contributor), Bos, K. (Contributor), Boscherini, D. (Contributor), Bosman, M. (Contributor), Boudreau, J. (Contributor), Bouffard, J. (Contributor), Bouhova-Thacker, E. V. (Contributor), Boumediene, D. (Contributor), Bourdarios, C. (Contributor), Bousson, N. (Contributor), Boveia, A. (Contributor), Boyd, J. (Contributor), Boyko, I. R. (Contributor), Bozic, I. (Contributor), Bracinik, J. (Contributor), Brandt, A. (Contributor), Brandt, G. (Contributor), Brandt, O. (Contributor), Bratzler, U. (Contributor), Brau, B. (Contributor), Brau, J. E. (Contributor), Braun, H. M. (Contributor), Brazzale, S. F. (Contributor), Breaden Madden, M. W. D. (Contributor), Brendlinger, K. (Contributor), Brennan, A. J. (Contributor), Brenner, L. (Contributor), Brenner, R. (Contributor), Bressler, S. (Contributor), Bristow, K. (Contributor), Bristow, T. M. (Contributor), Britton, D. (Contributor), Britzger, D. (Contributor), Brochu, F. M. (Contributor), Brock, I. (Contributor), Brock, R. (Contributor), Bronner, J. (Contributor), Brooijmans, G. (Contributor), Brooks, T. (Contributor), Brooks, W. K. (Contributor), Brosamer, J. (Contributor), Brost, E. (Contributor), Brown, J. (Contributor), Bruckman De Renstrom, D. R. P. A. (Contributor), Bruncko, D. (Contributor), Bruneliere, R. (Contributor), Bruni, A. (Contributor), Bruni, G. (Contributor), Bruschi, M. (Contributor), Bruscino, N. (Contributor), Bryngemark, L. (Contributor), Buanes, T. (Contributor), Buat, Q. (Contributor), Buchholz, P. (Contributor), Buckley, A. G. (Contributor), Buda, S. I. (Contributor), Budagov, I. A. (Contributor), Buehrer, F. (Contributor), Bugge, L. (Contributor), Bugge, M. K. (Contributor), Bulekov, O. (Contributor), Bullock, D. (Contributor), Burckhart, H. (Contributor), Burdin, S. (Contributor), Burgard, C. D. (Contributor), Burghgrave, B. (Contributor), Burke, S. (Contributor), Burmeister, I. (Contributor), Busato, E. (Contributor), Büscher, D. (Contributor), Büscher, V. 
(Contributor), Bussey, P. (Contributor), Butler, J. M. (Contributor), Butt, A. I. (Contributor), Buttar, C. M. (Contributor), Butterworth, J. M. (Contributor), Butti, P. (Contributor), Buttinger, W. (Contributor), Buzatu, A. (Contributor), Buzykaev, A. R. (Contributor), Cabrera Urbán, U. S. (Contributor), Caforio, D. (Contributor), Cairo, V. M. (Contributor), Cakir, O. (Contributor), Calace, N. (Contributor), Calafiura, P. (Contributor), Calandri, A. (Contributor), Calderini, G. (Contributor), Calfayan, P. (Contributor), Caloba, L. P. (Contributor), Calvet, D. (Contributor), Calvet, S. (Contributor), Camacho Toro, T. R. (Contributor), Camarda, S. (Contributor), Camarri, P. (Contributor), Cameron, D. (Contributor), Caminal Armadans, A. R. (Contributor), Campana, S. (Contributor), Campanelli, M. (Contributor), Campoverde, A. (Contributor), Canale, V. (Contributor), Canepa, A. (Contributor), Cano Bret, B. M. (Contributor), Cantero, J. (Contributor), Cantrill, R. (Contributor), Cao, T. (Contributor), Capeans Garrido, G. M. D. M. (Contributor), Caprini, I. (Contributor), Caprini, M. (Contributor), Capua, M. (Contributor), Caputo, R. (Contributor), Cardarelli, R. (Contributor), Cardillo, F. (Contributor), Carli, T. (Contributor), Carlino, G. (Contributor), Carminati, L. (Contributor), Caron, S. (Contributor), Carquin, E. (Contributor), Carrillo-Montoya, G. D. (Contributor), Carter, J. R. (Contributor), Carvalho, J. (Contributor), Casadei, D. (Contributor), Casado, M. P. (Contributor), Casolino, M. (Contributor), Castaneda-Miranda, E. (Contributor), Castelli, A. (Contributor), Castillo Gimenez, G. V. (Contributor), Castro, N. F. (Contributor), Catastini, P. (Contributor), Catinaccio, A. (Contributor), Catmore, J. R. (Contributor), Cattai, A. (Contributor), Caudron, J. (Contributor), Cavaliere, V. (Contributor), Cavalli, D. (Contributor), Cavalli-Sforza, M. (Contributor), Cavasinni, V. (Contributor), Ceradini, F. (Contributor), Cerio, B. C. (Contributor), Cerny, K. 
(Contributor), Cerqueira, A. S. (Contributor), Cerri, A. (Contributor), Cerrito, L. (Contributor), Cerutti, F. (Contributor), Cerv, M. (Contributor), Cervelli, A. (Contributor), Cetin, S. A. (Contributor), Chafaq, A. (Contributor), Chakraborty, D. (Contributor), Chalupkova, I. (Contributor), Chang, P. (Contributor), Chapman, J. D. (Contributor), Charlton, D. G. (Contributor), Chau, C. C. (Contributor), Chavez Barajas, B. C. A. (Contributor), Cheatham, S. (Contributor), Chegwidden, A. (Contributor), Chekanov, S. (Contributor), Chekulaev, S. V. (Contributor), Chelkov, G. A. (Contributor), Chelstowska, M. A. (Contributor), Chen, C. (Contributor), Chen, H. (Contributor), Chen, K. (Contributor), Chen, L. (Contributor), Chen, S. (Contributor), Chen, X. (Contributor), Chen, Y. (Contributor), Cheng, H. C. (Contributor), Cheng, Y. (Contributor), Cheplakov, A. (Contributor), Cheremushkina, E. (Contributor), Cherkaoui El Moursli, E. M. R. (Contributor), Chernyatin, V. (Contributor), Cheu, E. C. (Contributor), Chevalier, L. (Contributor), Chiarella, V. (Contributor), Chiarelli, G. (Contributor), Chiodini, G. (Contributor), Chisholm, A. S. (Contributor), Chislett, R. T. (Contributor), Chitan, A. (Contributor), Chizhov, M. V. (Contributor), Choi, K. (Contributor), Chouridou, S. (Contributor), Chow, B. K. B. (Contributor), Christodoulou, V. (Contributor), Chromek-Burckhart, D. (Contributor), Chudoba, J. (Contributor), Chuinard, A. J. (Contributor), Chwastowski, J. J. (Contributor), Chytka, L. (Contributor), Ciapetti, G. (Contributor), Ciftci, A. K. (Contributor), Cinca, D. (Contributor), Cindro, V. (Contributor), Cioara, I. A. (Contributor), Ciocio, A. (Contributor), Cirotto, F. (Contributor), Citron, Z. H. (Contributor), Ciubancan, M. (Contributor), Clark, A. (Contributor), Clark, B. L. (Contributor), Clark, P. J. (Contributor), Clarke, R. N. (Contributor), Cleland, W. (Contributor), Clement, C. (Contributor), Coadou, Y. (Contributor), Cobal, M. (Contributor), Coccaro, A. 
(Contributor), Cochran, J. (Contributor), Coffey, L. (Contributor), Cogan, J. G. (Contributor), Colasurdo, L. (Contributor), Cole, B. (Contributor), Cole, S. (Contributor), Colijn, A. P. (Contributor), Collot, J. (Contributor), Colombo, T. (Contributor), Compostella, G. (Contributor), Conde Muiño, M. P. (Contributor), Coniavitis, E. (Contributor), Connell, S. H. (Contributor), Connelly, I. A. (Contributor), Consorti, V. (Contributor), Constantinescu, S. (Contributor), Conta, C. (Contributor), Conti, G. (Contributor), Conventi, F. (Contributor), Cooke, M. (Contributor), Cooper, B. D. (Contributor), Cooper-Sarkar, A. M. (Contributor), Cornelissen, T. (Contributor), Corradi, M. (Contributor), Corriveau, F. (Contributor), Corso-Radu, A. (Contributor), Cortes-Gonzalez, A. (Contributor), Cortiana, G. (Contributor), Costa, G. (Contributor), Costa, M. J. (Contributor), Costanzo, D. (Contributor), Côté, D. (Contributor), Cottin, G. (Contributor), Cowan, G. (Contributor), Cox, B. E. (Contributor), Cranmer, K. (Contributor), Cree, G. (Contributor), Crépé-Renaudin, S. (Contributor), Crescioli, F. (Contributor), Cribbs, W. A. (Contributor), Crispin Ortuzar, O. M. (Contributor), Cristinziani, M. (Contributor), Croft, V. (Contributor), Crosetti, G. (Contributor), Cuhadar Donszelmann, D. T. (Contributor), Cummings, J. (Contributor), Curatolo, M. (Contributor), Cúth, J. (Contributor), Cuthbert, C. (Contributor), Czirr, H. (Contributor), Czodrowski, P. (Contributor), D'Auria, S. (Contributor), D'Onofrio, M. (Contributor), De Sousa, M. J. D. C. S. (Contributor), Davia, C. (Contributor), Dabrowski, W. (Contributor), Dafinca, A. (Contributor), Dai, T. (Contributor), Dale, O. (Contributor), Dallaire, F. (Contributor), Dallapiccola, C. (Contributor), Dam, M. (Contributor), Dandoy, J. R. (Contributor), Dang, N. P. (Contributor), Daniells, A. C. (Contributor), Danninger, M. (Contributor), Dano Hoffmann, H. M. (Contributor), Dao, V. (Contributor), Darbo, G. (Contributor), Darmora, S. 
(Contributor), Dassoulas, J. (Contributor), Dattagupta, A. (Contributor), Davey, W. (Contributor), David, C. (Contributor), Davidek, T. (Contributor), Davies, E. (Contributor), Davies, M. (Contributor), Davison, P. (Contributor), Davygora, Y. (Contributor), Dawe, E. (Contributor), Dawson, I. (Contributor), Daya-Ishmukhametova, R. K. (Contributor), De, K. (Contributor), De Asmundis, A. R. (Contributor), De Benedetti, B. A. (Contributor), De Castro, C. S. (Contributor), De Cecco, C. S. (Contributor), De Groot, G. N. (Contributor), De Jong, J. P. (Contributor), Delatorre, H. (Contributor), De Lorenzi, L. F. (Contributor), De Pedis, P. D. (Contributor), De Salvo, S. A. (Contributor), De Sanctis, S. U. (Contributor), De Santo, S. A. (Contributor), Deviviederegie, J. B. (Contributor), Dearnaley, W. J. (Contributor), Debbe, R. (Contributor), Debenedetti, C. (Contributor), Dedovich, D. V. (Contributor), Deigaard, I. (Contributor), Del Peso, P. J. (Contributor), Del Prete, P. T. (Contributor), Delgove, D. (Contributor), Deliot, F. (Contributor), Delitzsch, C. M. (Contributor), Deliyergiyev, M. (Contributor), Dell'Acqua, A. (Contributor), Dell'Asta, L. (Contributor), Dell'Orso, M. (Contributor), Della Pietra, P. M. (Contributor), Della Volpe, V. D. (Contributor), Delmastro, M. (Contributor), Delsart, P. A. (Contributor), Deluca, C. (Contributor), Demarco, D. A. (Contributor), Demers, S. (Contributor), Demichev, M. (Contributor), Demilly, A. (Contributor), Denisov, S. P. (Contributor), Derendarz, D. (Contributor), Derkaoui, J. E. (Contributor), Derue, F. (Contributor), Dervan, P. (Contributor), Desch, K. (Contributor), Deterre, C. (Contributor), Deviveiros, P. O. (Contributor), Dewhurst, A. (Contributor), Dhaliwal, S. (Contributor), Di Ciaccio, C. A. (Contributor), Di Ciaccio, C. L. (Contributor), Di Domenico, D. A. (Contributor), Di Donato, D. C. (Contributor), Di Girolamo, G. A. (Contributor), Di Girolamo, G. B. (Contributor), Di Mattia, M. A. (Contributor), Di Micco, M. B. 
(Contributor), Di Nardo, N. R. (Contributor), Di Simone, S. A. (Contributor), Di Sipio, S. R. (Contributor), Di Valentino, V. D. (Contributor), Diaconu, C. (Contributor), Diamond, M. (Contributor), Dias, F. A. (Contributor), Diaz, M. A. (Contributor), Diehl, E. B. (Contributor), Dietrich, J. (Contributor), Diglio, S. (Contributor), Dimitrievska, A. (Contributor), Dingfelder, J. (Contributor), Dita, P. (Contributor), Dita, S. (Contributor), Dittus, F. (Contributor), Djama, F. (Contributor), Djobava, T. (Contributor), Djuvsland, J. I. (Contributor), Do Vale, V. M. A. B. (Contributor), Dobos, D. (Contributor), Dobre, M. (Contributor), Doglioni, C. (Contributor), Dohmae, T. (Contributor), Dolejsi, J. (Contributor), Dolezal, Z. (Contributor), Dolgoshein, B. A. (Contributor), Donadelli, M. (Contributor), Donati, S. (Contributor), Dondero, P. (Contributor), Donini, J. (Contributor), Dopke, J. (Contributor), Doria, A. (Contributor), Dova, M. T. (Contributor), Doyle, A. T. (Contributor), Drechsler, E. (Contributor), Dris, M. (Contributor), Dubreuil, E. (Contributor), Duchovni, E. (Contributor), Duckeck, G. (Contributor), Ducu, O. A. (Contributor), Duda, D. (Contributor), Dudarev, A. (Contributor), Duflot, L. (Contributor), Duguid, L. (Contributor), Dührssen, M. (Contributor), Dunford, M. (Contributor), Zwalinski, L. (Contributor), Düren, M. (Contributor), Durglishvili, A. (Contributor), Duschinger, D. (Contributor), Dyndal, M. (Contributor), Eckardt, C. (Contributor), Ecker, K. M. (Contributor), Edgar, R. C. (Contributor), Edson, W. (Contributor), Edwards, N. C. (Contributor), Ehrenfeld, W. (Contributor), Eifert, T. (Contributor), Eigen, G. (Contributor), Einsweiler, K. (Contributor), Ekelof, T. (Contributor), El Kacimi, K. M. (Contributor), Ellert, M. (Contributor), Elles, S. (Contributor), Ellinghaus, F. (Contributor), Elliot, A. A. (Contributor), Ellis, N. (Contributor), Elmsheuser, J. (Contributor), Elsing, M. (Contributor), Emeliyanov, D. (Contributor), Enari, Y. 
(Contributor), Endner, O. C. (Contributor), Endo, M. (Contributor), Erdmann, J. (Contributor), Ereditato, A. (Contributor), Ernis, G. (Contributor), Ernst, J. (Contributor), Ernst, M. (Contributor), Errede, S. (Contributor), Ertel, E. (Contributor), Escalier, M. (Contributor), Esch, H. (Contributor), Escobar, C. (Contributor), Esposito, B. (Contributor), Etienvre, A. I. (Contributor), Etzion, E. (Contributor), Evans, H. (Contributor), Ezhilov, A. (Contributor), Fabbri, L. (Contributor), Facini, G. (Contributor), Fakhrutdinov, R. M. (Contributor), Falciano, S. (Contributor), Falla, R. J. (Contributor), Faltova, J. (Contributor), Fang, Y. (Contributor), Fanti, M. (Contributor), Farbin, A. (Contributor), Farilla, A. (Contributor), Farooque, T. (Contributor), Farrell, S. (Contributor), Farrington, S. M. (Contributor), Farthouat, P. (Contributor), Fassi, F. (Contributor), Fassnacht, P. (Contributor), Fassouliotis, D. (Contributor), Faucci Giannelli, G. M. (Contributor), Favareto, A. (Contributor), Fayard, L. (Contributor), Federic, P. (Contributor), Fedin, O. L. (Contributor), Fedorko, W. (Contributor), Feigl, S. (Contributor), Feligioni, L. (Contributor), Feng, C. (Contributor), Feng, E. J. (Contributor), Feng, H. (Contributor), Fenyuk, A. B. (Contributor), Feremenga, L. (Contributor), Fernandez Martinez, M. P. (Contributor), Fernandez Perez, P. S. (Contributor), Ferrando, J. (Contributor), Ferrari, A. (Contributor), Ferrari, P. (Contributor), Ferrari, R. (Contributor), Ferreira De Lima, D. L. D. E. (Contributor), Ferrer, A. (Contributor), Ferrere, D. (Contributor), Ferretti, C. (Contributor), Ferretto Parodi, P. A. (Contributor), Fiascaris, M. (Contributor), Fiedler, F. (Contributor), Filipčič, A. (Contributor), Filipuzzi, M. (Contributor), Filthaut, F. (Contributor), Fincke-Keeler, M. (Contributor), Finelli, K. D. (Contributor), Fiolhais, M. C. N. (Contributor), Fiorini, L. (Contributor), Firan, A. (Contributor), Fischer, A. (Contributor), Fischer, C. 
(Contributor), Fischer, J. (Contributor), Fisher, W. C. (Contributor), Fitzgerald, E. A. (Contributor), Flaschel, N. (Contributor), Fleck, I. (Contributor), Fleischmann, P. (Contributor), Fleischmann, S. (Contributor), Fletcher, G. T. (Contributor), Fletcher, G. (Contributor), Fletcher, R. R. M. (Contributor), Flick, T. (Contributor), Floderus, A. (Contributor), Flores Castillo, C. L. R. (Contributor), Flowerdew, M. J. (Contributor), Formica, A. (Contributor), Forti, A. (Contributor), Fournier, D. (Contributor), Fox, H. (Contributor), Fracchia, S. (Contributor), Francavilla, P. (Contributor), Franchini, M. (Contributor), Francis, D. (Contributor), Franconi, L. (Contributor), Franklin, M. (Contributor), Frate, M. (Contributor), Fraternali, M. (Contributor), Freeborn, D. (Contributor), French, S. T. (Contributor), Friedrich, F. (Contributor), Froidevaux, D. (Contributor), Frost, J. A. (Contributor), Fukunaga, C. (Contributor), Fullana Torregrosa, T. E. (Contributor), Fulsom, B. G. (Contributor), Fusayasu, T. (Contributor), Fuster, J. (Contributor), Gabaldon, C. (Contributor), Gabizon, O. (Contributor), Gabrielli, A. (Contributor), Gabrielli, A. (Contributor), Gach, G. P. (Contributor), Gadatsch, S. (Contributor), Gadomski, S. (Contributor), Gagliardi, G. (Contributor), Gagnon, P. (Contributor), Galea, C. (Contributor), Galhardo, B. (Contributor), Gallas, E. J. (Contributor), Gallop, B. J. (Contributor), Gallus, P. (Contributor), Galster, G. (Contributor), Gan, K. K. (Contributor), Gao, J. (Contributor), Gao, Y. (Contributor), Gao, Y. S. (Contributor), Garay Walls, W. F. M. (Contributor), Garberson, F. (Contributor), García, C. (Contributor), García Navarro, N. J. E. (Contributor), Garcia-Sciveres, M. (Contributor), Gardner, R. W. (Contributor), Garelli, N. (Contributor), Garonne, V. (Contributor), Gatti, C. (Contributor), Gaudiello, A. (Contributor), Gaudio, G. (Contributor), Gaur, B. (Contributor), Gauthier, L. (Contributor), Gauzzi, P. (Contributor), Gavrilenko, I. 
L. (Contributor), Gay, C. (Contributor), Gaycken, G. (Contributor), Gazis, E. N. (Contributor), Ge, P. (Contributor), Gecse, Z. (Contributor), Gee, C. N. P. (Contributor), Geich-Gimbel, C. H. (Contributor), Geisler, M. P. (Contributor), Gemme, C. (Contributor), Genest, M. H. (Contributor), Gentile, S. (Contributor), George, M. (Contributor), George, S. (Contributor), Gerbaudo, D. (Contributor), Gershon, A. (Contributor), Ghasemi, S. (Contributor), Ghazlane, H. (Contributor), Giacobbe, B. (Contributor), Giagu, S. (Contributor), Giangiobbe, V. (Contributor), Giannetti, P. (Contributor), Gibbard, B. (Contributor), Gibson, S. M. (Contributor), Gilchriese, M. (Contributor), Gillam, T. P. S. (Contributor), Gillberg, D. (Contributor), Gilles, G. (Contributor), Gingrich, D. M. (Contributor), Giokaris, N. (Contributor), Giordani, M. P. (Contributor), Giorgi, F. M. (Contributor), Giorgi, F. M. (Contributor), Giraud, P. F. (Contributor), Giromini, P. (Contributor), Giugni, D. (Contributor), Giuliani, C. (Contributor), Giulini, M. (Contributor), Gjelsten, B. K. (Contributor), Gkaitatzis, S. (Contributor), Gkialas, I. (Contributor), Gkougkousis, E. L. (Contributor), Gladilin, L. K. (Contributor), Glasman, C. (Contributor), Glatzer, J. (Contributor), Glaysher, P. C. F. (Contributor), Glazov, A. (Contributor), Goblirsch-Kolb, M. (Contributor), Goddard, J. R. (Contributor), Godlewski, J. (Contributor), Goldfarb, S. (Contributor), Golling, T. (Contributor), Golubkov, D. (Contributor), Gomes, A. (Contributor), Gonçalo, R. (Contributor), Goncalves Pinto Firmino Da Costa, P. F. D. C. J. (Contributor), Gonella, L. (Contributor), González De La Hoz, D. L. H. S. (Contributor), Gonzalez Parra, P. G. (Contributor), Gonzalez-Sevilla, S. (Contributor), Goossens, L. (Contributor), Gorbounov, P. A. (Contributor), Gordon, H. A. (Contributor), Gorelov, I. (Contributor), Gorini, B. (Contributor), Gorini, E. (Contributor), Gorišek, A. (Contributor), Gornicki, E. (Contributor), Goshaw, A. T. 
(Contributor), Gössling, C. (Contributor), Gostkin, M. I. (Contributor), Goujdami, D. (Contributor), Goussiou, A. G. (Contributor), Govender, N. (Contributor), Gozani, E. (Contributor), Grabas, H. M. X. (Contributor), Graber, L. (Contributor), Grabowska-Bold, I. (Contributor), Gradin, P. O. J. (Contributor), Grafström, P. (Contributor), Grahn, K. (Contributor), Gramling, J. (Contributor), Gramstad, E. (Contributor), Grancagnolo, S. (Contributor), Gratchev, V. (Contributor), Gray, H. M. (Contributor), Graziani, E. (Contributor), Greenwood, Z. D. (Contributor), Grefe, C. (Contributor), Gregersen, K. (Contributor), Gregor, I. M. (Contributor), Grenier, P. (Contributor), Griffiths, J. (Contributor), Grillo, A. A. (Contributor), Grimm, K. (Contributor), Grinstein, S. (Contributor), Gris, P. H. (Contributor), Grivaz, J. (Contributor), Grohs, J. P. (Contributor), Grohsjean, A. (Contributor), Gross, E. (Contributor), Grosse-Knetter, J. (Contributor), Grossi, G. C. (Contributor), Grout, Z. J. (Contributor), Guan, L. (Contributor), Guenther, J. (Contributor), Guescini, F. (Contributor), Guest, D. (Contributor), Gueta, O. (Contributor), Guido, E. (Contributor), Guillemin, T. (Contributor), Guindon, S. (Contributor), Gul, U. (Contributor), Gumpert, C. (Contributor), Guo, J. (Contributor), Guo, Y. (Contributor), Gupta, S. (Contributor), Gustavino, G. (Contributor), Gutierrez, P. (Contributor), Gutierrez Ortiz, O. N. G. (Contributor), Gutschow, C. (Contributor), Guyot, C. (Contributor), Gwenlan, C. (Contributor), Gwilliam, C. B. (Contributor), Haas, A. (Contributor), Haber, C. (Contributor), Hadavand, H. K. (Contributor), Haddad, N. (Contributor), Haefner, P. (Contributor), Hageböck, S. (Contributor), Hajduk, Z. (Contributor), Hakobyan, H. (Contributor), Haleem, M. (Contributor), Haley, J. (Contributor), Hall, D. (Contributor), Halladjian, G. (Contributor), Hallewell, G. D. (Contributor), Hamacher, K. (Contributor), Hamal, P. (Contributor), Hamano, K. (Contributor), Hamilton, A. 
(Contributor), Hamity, G. N. (Contributor), Hamnett, P. G. (Contributor), Han, L. (Contributor), Hanagaki, K. (Contributor), Hanawa, K. (Contributor), Hance, M. (Contributor), Hanke, P. (Contributor), Hanna, R. (Contributor), Hansen, J. B. (Contributor), Hansen, J. D. (Contributor), Hansen, M. C. (Contributor), Hansen, P. H. (Contributor), Hara, K. (Contributor), Hard, A. S. (Contributor), Harenberg, T. (Contributor), Hariri, F. (Contributor), Harkusha, S. (Contributor), Harrington, R. D. (Contributor), Harrison, P. F. (Contributor), Hartjes, F. (Contributor), Hasegawa, M. (Contributor), Hasegawa, Y. (Contributor), Hasib, A. (Contributor), Hassani, S. (Contributor), Haug, S. (Contributor), Hauser, R. (Contributor), Hauswald, L. (Contributor), Havranek, M. (Contributor), Hawkes, C. M. (Contributor), Hawkings, R. J. (Contributor), Hawkins, A. D. (Contributor), Hayashi, T. (Contributor), Hayden, D. (Contributor), Hays, C. P. (Contributor), Hays, J. M. (Contributor), Hayward, H. S. (Contributor), Haywood, S. J. (Contributor), Head, S. J. (Contributor), Heck, T. (Contributor), Hedberg, V. (Contributor), Heelan, L. (Contributor), Heim, S. (Contributor), Heim, T. (Contributor), Heinemann, B. (Contributor), Heinrich, L. (Contributor), Hejbal, J. (Contributor), Helary, L. (Contributor), Hellman, S. (Contributor), Hellmich, D. (Contributor), Helsens, C. (Contributor), Henderson, J. (Contributor), Henderson, R. C. W. (Contributor), Heng, Y. (Contributor), Hengler, C. (Contributor), Henkelmann, S. (Contributor), Henrichs, A. (Contributor), Henriques Correia, C. A. M. (Contributor), Henrot-Versille, S. (Contributor), Herbert, G. H. (Contributor), Hernández Jiménez, J. Y. (Contributor), Herrberg-Schubert, R. (Contributor), Herten, G. (Contributor), Hertenberger, R. (Contributor), Hervas, L. (Contributor), Hesketh, G. G. (Contributor), Hessey, N. P. (Contributor), Hetherly, J. W. (Contributor), Hickling, R. (Contributor), Higón-Rodriguez, E. (Contributor), Hill, E. 
(Contributor), Hill, J. C. (Contributor), Hiller, K. H. (Contributor), Hillier, S. J. (Contributor), Hinchliffe, I. (Contributor), Hines, E. (Contributor), Hinman, R. R. (Contributor), Hirose, M. (Contributor), Hirschbuehl, D. (Contributor), Hobbs, J. (Contributor), Hod, N. (Contributor), Hodgkinson, M. C. (Contributor), Hodgson, P. (Contributor), Hoecker, A. (Contributor), Hoeferkamp, M. R. (Contributor), Hoenig, F. (Contributor), Hohlfeld, M. (Contributor), Hohn, D. (Contributor), Holmes, T. R. (Contributor), Homann, M. (Contributor), Hong, T. M. (Contributor), Hooft Van Huysduynen, V. H. L. (Contributor), Hopkins, W. H. (Contributor), Horii, Y. (Contributor), Horton, A. J. (Contributor), Hostachy, J. (Contributor), Hou, S. (Contributor), Hoummada, A. (Contributor), Howard, J. (Contributor), Howarth, J. (Contributor), Hrabovsky, M. (Contributor), Hristova, I. (Contributor), Hrivnac, J. (Contributor), Hryn'Ova, T. (Contributor), Hrynevich, A. (Contributor), Hsu, C. (Contributor), Hsu, P. J. (Contributor), Hsu, S. (Contributor), Hu, D. (Contributor), Hu, Q. (Contributor), Hu, X. (Contributor), Huang, Y. (Contributor), Hubacek, Z. (Contributor), Hubaut, F. (Contributor), Huegging, F. (Contributor), Huffman, T. B. (Contributor), Hughes, E. W. (Contributor), Hughes, G. (Contributor), Huhtinen, M. (Contributor), Hülsing, T. A. (Contributor), Huseynov, N. (Contributor), Huston, J. (Contributor), Huth, J. (Contributor), Iacobucci, G. (Contributor), Iakovidis, G. (Contributor), Ibragimov, I. (Contributor), Iconomidou-Fayard, L. (Contributor), Ideal, E. (Contributor), Idrissi, Z. (Contributor), Iengo, P. (Contributor), Igonkina, O. (Contributor), Iizawa, T. (Contributor), Ikegami, Y. (Contributor), Ikeno, M. (Contributor), Ilchenko, Y. (Contributor), Iliadis, D. (Contributor), Ilic, N. (Contributor), Ince, T. (Contributor), Introzzi, G. (Contributor), Ioannou, P. (Contributor), Iodice, M. (Contributor), Iordanidou, K. (Contributor), Ippolito, V. 
(Contributor), Irles Quiles, Q. A. (Contributor), Isaksson, C. (Contributor), Ishino, M. (Contributor), Ishitsuka, M. (Contributor), Ishmukhametov, R. (Contributor), Issever, C. (Contributor), Istin, S. (Contributor), Iturbe Ponce, P. J. M. (Contributor), Iuppa, R. (Contributor), Ivarsson, J. (Contributor), Iwanski, W. (Contributor), Iwasaki, H. (Contributor), Izen, J. M. (Contributor), Izzo, V. (Contributor), Jabbar, S. (Contributor), Jackson, B. (Contributor), Jackson, M. (Contributor), Jackson, P. (Contributor), Jaekel, M. R. (Contributor), Jain, V. (Contributor), Jakobs, K. (Contributor), Jakobsen, S. (Contributor), Jakoubek, T. (Contributor), Jakubek, J. (Contributor), Jamin, D. O. (Contributor), Jana, D. K. (Contributor), Jansen, E. (Contributor), Jansky, R. (Contributor), Janssen, J. (Contributor), Janus, M. (Contributor), Jarlskog, G. (Contributor), Javadov, N. (Contributor), Javůrek, T. (Contributor), Jeanty, L. (Contributor), Jejelava, J. (Contributor), Jeng, G. (Contributor), Jennens, D. (Contributor), Jenni, P. (Contributor), Jentzsch, J. (Contributor), Jeske, C. (Contributor), Jézéquel, S. (Contributor), Ji, H. (Contributor), Jia, J. (Contributor), Jiang, Y. (Contributor), Jiggins, S. (Contributor), Jimenez Pena, P. J. (Contributor), Jin, S. (Contributor), Jinaru, A. (Contributor), Jinnouchi, O. (Contributor), Joergensen, M. D. (Contributor), Johansson, P. (Contributor), Johns, K. A. (Contributor), Jon-And, K. (Contributor), Jones, G. (Contributor), Jones, R. W. L. (Contributor), Jones, T. J. (Contributor), Jongmanns, J. (Contributor), Jorge, P. M. (Contributor), Joshi, K. D. (Contributor), Jovicevic, J. (Contributor), Ju, X. (Contributor), Jung, C. A. (Contributor), Jussel, P. (Contributor), Juste Rozas, R. A. (Contributor), Kaci, M. (Contributor), Kaczmarska, A. (Contributor), Kado, M. (Contributor), Kagan, H. (Contributor), Kagan, M. (Contributor), Kahn, S. J. (Contributor), Kajomovitz, E. (Contributor), Kalderon, C. W. (Contributor), Kama, S. 
(Contributor), Vykydal, Z. (Contributor), Wagner, P. (Contributor), Wagner, W. (Contributor), Wahlberg, H. (Contributor), Wahrmund, S. (Contributor), Wakabayashi, J. (Contributor), Walder, J. (Contributor), Walker, R. (Contributor), Walkowiak, W. (Contributor), Wang, C. (Contributor), Wang, F. (Contributor), Wang, H. (Contributor), Wang, H. (Contributor), Wang, J. (Contributor), Wang, J. (Contributor), Wang, K. (Contributor), Wang, R. (Contributor), Wang, S. M. (Contributor), Wang, T. (Contributor), Wang, T. (Contributor), Wang, X. (Contributor), Wanotayaroj, C. (Contributor), Warburton, A. (Contributor), Ward, C. P. (Contributor), Wardrope, D. R. (Contributor), Washbrook, A. (Contributor), Wasicki, C. (Contributor), Watkins, P. M. (Contributor), Watson, A. T. (Contributor), Watson, I. J. (Contributor), Watson, M. F. (Contributor), Watts, G. (Contributor), Watts, S. (Contributor), Waugh, B. M. (Contributor), Webb, S. (Contributor), Weber, M. S. (Contributor), Weber, S. W. (Contributor), Webster, J. S. (Contributor), Weidberg, A. R. (Contributor), Weinert, B. (Contributor), Weingarten, J. (Contributor), Weiser, C. (Contributor), Weits, H. (Contributor), Wells, P. S. (Contributor), Wenaus, T. (Contributor), Wengler, T. (Contributor), Wenig, S. (Contributor), Wermes, N. (Contributor), Werner, M. (Contributor), Werner, P. (Contributor), Wessels, M. (Contributor), Wetter, J. (Contributor), Whalen, K. (Contributor), Wharton, A. M. (Contributor), White, A. (Contributor), White, M. J. (Contributor), White, R. (Contributor), White, S. (Contributor), Whiteson, D. (Contributor), Wickens, F. J. (Contributor), Wiedenmann, W. (Contributor), Wielers, M. (Contributor), Wienemann, P. (Contributor), Wiglesworth, C. (Contributor), Wiik-Fuchs, L. A. M. (Contributor), Wildauer, A. (Contributor), Wilkens, H. G. (Contributor), Williams, H. H. (Contributor), Williams, S. (Contributor), Willis, C. (Contributor), Willocq, S. (Contributor), Wilson, A. (Contributor), Wilson, J. A. 
(Contributor), Wingerter-Seez, I. (Contributor), Winklmeier, F. (Contributor), Winter, B. T. (Contributor), Wittgen, M. (Contributor), Wittkowski, J. (Contributor), Wollstadt, S. J. (Contributor), Wolter, M. W. (Contributor), Wolters, H. (Contributor), Wosiek, B. K. (Contributor), Wotschack, J. (Contributor), Woudstra, M. J. (Contributor), Wozniak, K. W. (Contributor), Wu, M. (Contributor), Wu, M. (Contributor), Wu, S. L. (Contributor), Wu, X. (Contributor), Wu, Y. (Contributor), Wyatt, T. R. (Contributor), Wynne, B. M. (Contributor), Xella, S. (Contributor), Xu, D. (Contributor), Xu, L. (Contributor), Yabsley, B. (Contributor), Yacoob, S. (Contributor), Yakabe, R. (Contributor), Yamada, M. (Contributor), Yamaguchi, D. (Contributor), Yamaguchi, Y. (Contributor), Yamamoto, A. (Contributor), Yamamoto, S. (Contributor), Yamanaka, T. (Contributor), Yamauchi, K. (Contributor), Yamazaki, Y. (Contributor), Yan, Z. (Contributor), Yang, H. (Contributor), Yang, H. (Contributor), Yang, Y. (Contributor), Yao, W. (Contributor), Yasu, Y. (Contributor), Yatsenko, E. (Contributor), Yau Wong, W. K. H. (Contributor), Ye, J. (Contributor), Ye, S. (Contributor), Yeletskikh, I. (Contributor), Yen, A. L. (Contributor), Yildirim, E. (Contributor), Yorita, K. (Contributor), Yoshida, R. (Contributor), Yoshihara, K. (Contributor), Young, C. (Contributor), Young, C. J. S. (Contributor), Youssef, S. (Contributor), Yu, D. R. (Contributor), Yu, J. (Contributor), Yu, J. M. (Contributor), Yu, J. (Contributor), Yuan, L. (Contributor), Yuen, S. P. Y. (Contributor), Yurkewicz, A. (Contributor), Yusuff, I. (Contributor), Zabinski, B. (Contributor), Zaidan, R. (Contributor), Zaitsev, A. M. (Contributor), Zalieckas, J. (Contributor), Zaman, A. (Contributor), Zambito, S. (Contributor), Zanello, L. (Contributor), Zanzi, D. (Contributor), Zeitnitz, C. (Contributor), Zeman, M. (Contributor), Zemla, A. (Contributor), Zeng, Q. (Contributor), Zengel, K. (Contributor), Zenin, O. (Contributor), Ženiš, T. 
(Contributor), Zerwas, D. (Contributor), Zhang, D. (Contributor), Zhang, F. (Contributor), Zhang, H. (Contributor), Zhang, J. (Contributor), Zhang, L. (Contributor), Zhang, R. (Contributor), Zhang, X. (Contributor), Zhang, Z. (Contributor), Zhao, X. (Contributor), Zhao, Y. (Contributor), Zhao, Z. (Contributor), Zhemchugov, A. (Contributor), Zhong, J. (Contributor), Zhou, B. (Contributor), Zhou, C. (Contributor), Zhou, L. (Contributor), Zhou, L. (Contributor), Zhou, M. (Contributor), Zhou, N. (Contributor), Zhu, C. G. (Contributor), Zhu, H. (Contributor), Zhu, J. (Contributor), Zhu, Y. (Contributor), Zhuang, X. (Contributor), Zhukov, K. (Contributor), Zibell, A. (Contributor), Zieminska, D. (Contributor), Zimine, N. I. (Contributor), Zimmermann, C. (Contributor), Zimmermann, S. (Contributor), Zinonos, Z. (Contributor), Zinser, M. (Contributor), Ziolkowski, M. (Contributor), Živković, L. (Contributor), Zobernig, G. (Contributor), Zoccoli, A. (Contributor), Zur Nedden, N. M. (Contributor), Zurzolo, G. (Contributor) & Collaboration, A. (Creator), HEPData, 2016 DOI: 10.17182/hepdata.72721.v1, https://www.hepdata.net/record/ins1409298?version=1 Z gamma production and limits on anomalous Z Z gamma and Z gamma gamma couplings in panti-p collisions at s**(1/2) = 1.96- TeV Abazov, V. M. (Contributor), Abbott, B. (Contributor), Abolins, M. (Contributor), Acharya, B. S. (Contributor), Adams, M. (Contributor), Adams, T. (Contributor), Aguilo, E. (Contributor), Ahn, S. H. (Contributor), Ahsan, M. (Contributor), Alexeev, G. D. (Contributor), Alkhazov, G. (Contributor), Alton, A. (Contributor), Alverson, G. (Contributor), Alves, G. A. (Contributor), Anastasoaie, M. (Contributor), Ancu, L. S. (Contributor), Andeen, T. (Contributor), Anderson, S. (Contributor), Andrieu, B. (Contributor), Anzelc, M. S. (Contributor), Arnoud, Y. (Contributor), Arov, M. (Contributor), Arthaud, M. (Contributor), Askew, A. (Contributor), Åsman, B. (Contributor), Assis Jesus, J. A. C. S. 
(Contributor), Atramentov, O. (Contributor), Autermann, C. (Contributor), Avila, C. (Contributor), Ay, C. (Contributor), Badaud, F. (Contributor), Baden, A. (Contributor), Bagby, L. (Contributor), Baldin, B. (Contributor), Bandurin, D. V. (Contributor), Banerjee, P. (Contributor), Banerjee, S. (Contributor), Barberis, E. (Contributor), Barfuss, A. (Contributor), Bargassa, P. (Contributor), Baringer, P. (Contributor), Barreto, J. (Contributor), Bartlett, J. F. (Contributor), Bassler, U. (Contributor), Bauer, D. (Contributor), Beale, S. (Contributor), Bean, A. (Contributor), Begalli, M. (Contributor), Begel, M. (Contributor), Belanger-Champagne, C. (Contributor), Bellantoni, L. (Contributor), Bellavance, A. (Contributor), Benitez, J. A. (Contributor), Beri, S. B. (Contributor), Bernardi, G. (Contributor), Bernhard, R. (Contributor), Berntzon, L. (Contributor), Bertram, I. (Contributor), Besançon, M. (Contributor), Beuselinck, R. (Contributor), Bezzubov, V. A. (Contributor), Bhat, P. C. (Contributor), Bhatnagar, V. (Contributor), Biscarat, C. (Contributor), Blazey, G. (Contributor), Blekman, F. (Contributor), Blessing, S. (Contributor), Bloch, D. (Contributor), Bloom, K. (Contributor), Boehnlein, A. (Contributor), Boline, D. (Contributor), Bolton, T. A. (Contributor), Borissov, G. (Contributor), Bos, K. (Contributor), Bose, T. (Contributor), Brandt, A. (Contributor), Brock, R. (Contributor), Brooijmans, G. (Contributor), Bross, A. (Contributor), Brown, D. (Contributor), Buchanan, N. J. (Contributor), Buchholz, D. (Contributor), Buehler, M. (Contributor), Buescher, V. (Contributor), Burdin, S. (Contributor), Burke, S. (Contributor), Burnett, T. H. (Contributor), Buszello, C. P. (Contributor), Butler, J. M. (Contributor), Calfayan, P. (Contributor), Calvet, S. (Contributor), Cammin, J. (Contributor), Caron, S. (Contributor), Carvalho, W. (Contributor), Casey, B. C. K. (Contributor), Cason, N. M. (Contributor), Castilla-Valdez, H. (Contributor), Chakrabarti, S. 
(Contributor), Chakraborty, D. (Contributor), Chan, K. (Contributor), Chan, K. M. (Contributor), Chandra, A. (Contributor), Charles, F. (Contributor), Cheu, E. C. (Contributor), Chevallier, F. (Contributor), Cho, D. K. (Contributor), Choi, S. (Contributor), Choudhary, B. (Contributor), Christofek, L. (Contributor), Christoudias, T. (Contributor), Cihangir, S. (Contributor), Claes, D. (Contributor), Clément, B. (Contributor), Clément, C. (Contributor), Coadou, Y. (Contributor), Cooke, M. (Contributor), Cooper, W. E. (Contributor), Corcoran, M. (Contributor), Couderc, F. (Contributor), Cousinou, M. (Contributor), Crépé-Renaudin, S. (Contributor), Cutts, D. (Contributor), Ćwiok, M. (Contributor), da Motta, M. H. (Contributor), Das, A. (Contributor), Davies, G. (Contributor), De, K. (Contributor), de Jong, J. P. (Contributor), de Jong, J. S. J. (Contributor), De La Cruz-Burelo, L. C. E. (Contributor), De Oliveira Martins, O. M. C. (Contributor), Degenhardt, J. D. (Contributor), Déliot, F. (Contributor), Demarteau, M. (Contributor), Demina, R. (Contributor), Denisov, D. (Contributor), Denisov, S. P. (Contributor), Desai, S. (Contributor), Diehl, H. T. (Contributor), Diesburg, M. (Contributor), Dominguez, A. (Contributor), Dong, H. (Contributor), Dudko, L. V. (Contributor), Duflot, L. (Contributor), Dugad, S. R. (Contributor), Duggan, D. (Contributor), Duperrin, A. (Contributor), Dyer, J. (Contributor), Dyshkant, A. (Contributor), Eads, M. (Contributor), Edmunds, D. (Contributor), Ellison, J. (Contributor), Elvira, V. D. (Contributor), Enari, Y. (Contributor), Eno, S. (Contributor), Ermolov, P. (Contributor), Evans, H. (Contributor), Evdokimov, A. (Contributor), Evdokimov, V. N. (Contributor), Ferapontov, A. V. (Contributor), Ferbel, T. (Contributor), Fiedler, F. (Contributor), Filthaut, F. (Contributor), Fisher, W. (Contributor), Fisk, H. E. (Contributor), Ford, M. (Contributor), Fortner, M. (Contributor), Fox, H. (Contributor), Fu, S. (Contributor), Fuess, S. 
(Contributor), Gadfort, T. (Contributor), Galea, C. F. (Contributor), Gallas, E. (Contributor), Galyaev, E. (Contributor), Garcia, C. (Contributor), Garcia-Bellido, A. (Contributor), Gavrilov, V. (Contributor), Gay, P. (Contributor), Geist, W. (Contributor), Gelé, D. (Contributor), Gerber, C. E. (Contributor), Gershtein, Y. (Contributor), Gillberg, D. (Contributor), Ginther, G. (Contributor), Gollub, N. (Contributor), Gómez, B. (Contributor), Goussiou, A. (Contributor), Grannis, P. D. (Contributor), Greenlee, H. (Contributor), Greenwood, Z. D. (Contributor), Gregores, E. M. (Contributor), Grenier, G. (Contributor), Gris, G. (Contributor), Grivaz, J. (Contributor), Grohsjean, A. (Contributor), Grünendahl, S. (Contributor), Grünewald, M. W. (Contributor), Guo, F. (Contributor), Guo, J. (Contributor), Gutierrez, G. (Contributor), Gutierrez, P. (Contributor), Haas, A. (Contributor), Hadley, N. J. (Contributor), Haefner, P. (Contributor), Hagopian, S. (Contributor), Haley, J. (Contributor), Hall, I. (Contributor), Hall, R. E. (Contributor), Han, L. (Contributor), Hanagaki, K. (Contributor), Hansson, P. (Contributor), Harder, K. (Contributor), Harel, A. (Contributor), Harrington, R. (Contributor), Hauptman, J. M. (Contributor), Hauser, R. (Contributor), Hays, J. (Contributor), Hebbeker, T. (Contributor), Hedin, D. (Contributor), Hegeman, J. G. (Contributor), Heinmiller, J. M. (Contributor), Heinson, A. P. (Contributor), Heintz, U. (Contributor), Hensel, C. (Contributor), Herner, K. (Contributor), Hesketh, G. (Contributor), Hildreth, M. D. (Contributor), Hirosky, R. (Contributor), Hobbs, J. D. (Contributor), Hoeneisen, B. (Contributor), Hoeth, H. (Contributor), Hohlfeld, M. (Contributor), Hong, S. J. (Contributor), Hooper, R. (Contributor), Hossain, S. (Contributor), Houben, P. (Contributor), Hu, Y. (Contributor), Hubacek, Z. (Contributor), Hynek, V. (Contributor), Iashvili, I. (Contributor), Illingworth, R. (Contributor), Ito, A. S. (Contributor), Jabeen, S. 
(Contributor), Jaffré, M. (Contributor), Jain, S. (Contributor), Jakobs, K. (Contributor), Jarvis, C. (Contributor), Jesik, R. (Contributor), Johns, K. A. (Contributor), Johnson, C. (Contributor), Johnson, M. (Contributor), Jonckheere, A. (Contributor), Jonsson, P. (Contributor), Juste, A. (Contributor), Käfer, D. (Contributor), Kahn, S. (Contributor), Kajfasz, E. (Contributor), Kalinin, A. M. (Contributor), Kalk, J. M. (Contributor), Kalk, J. R. (Contributor), Kappler, S. (Contributor), Karmanov, D. (Contributor), Kasper, J. (Contributor), Kasper, P. (Contributor), Katsanos, I. (Contributor), Kau, D. (Contributor), Kaur, R. (Contributor), Kaushik, V. (Contributor), Kehoe, R. (Contributor), Kermiche, S. (Contributor), Khalatyan, N. (Contributor), Khanov, A. (Contributor), Kharchilava, A. (Contributor), Kharzheev, Y. M. (Contributor), Khatidze, D. (Contributor), Kim, H. (Contributor), Kim, T. J. (Contributor), Kirby, M. H. (Contributor), Kirsch, M. (Contributor), Klima, B. (Contributor), Kohli, J. M. (Contributor), Konrath, J. (Contributor), Kopal, M. (Contributor), Korablev, V. M. (Contributor), Kothari, B. (Contributor), Kozelov, A. V. (Contributor), Krop, D. (Contributor), Kryemadhi, A. (Contributor), Kuhl, T. (Contributor), Kumar, A. (Contributor), Kunori, S. (Contributor), Kupco, A. (Contributor), Kurča, T. (Contributor), Kvita, J. (Contributor), Lam, D. (Contributor), Lammers, S. (Contributor), Landsberg, G. (Contributor), Lazoflores, J. (Contributor), Lebrun, P. (Contributor), Lee, W. M. (Contributor), Leflat, A. (Contributor), Lehner, F. (Contributor), Lellouch, J. (Contributor), Lesne, V. (Contributor), Leveque, J. (Contributor), Lewis, P. (Contributor), Li, J. (Contributor), Li, L. (Contributor), Li, Q. Z. (Contributor), Lietti, S. M. (Contributor), Lima, J. G. R. (Contributor), Lincoln, D. (Contributor), Linnemann, J. (Contributor), Lipaev, V. V. (Contributor), Lipton, R. (Contributor), Liu, Y. (Contributor), Liu, Z. (Contributor), Lobo, L. 
(Contributor), Lobodenko, A. (Contributor), Lokajicek, M. (Contributor), Lounis, A. (Contributor), Love, P. (Contributor), Lubatti, H. J. (Contributor), Lyon, A. L. (Contributor), Maciel, A. K. A. (Contributor), Mackin, D. (Contributor), Madaras, R. J. (Contributor), Mättig, P. (Contributor), Magass, C. (Contributor), Magerkurth, A. (Contributor), Makovec, N. (Contributor), Mal, P. K. (Contributor), Malbouisson, H. B. (Contributor), Malik, S. (Contributor), Malyshev, V. L. (Contributor), Mao, H. S. (Contributor), Maravin, Y. (Contributor), Martin, B. (Contributor), McCarthy, R. (Contributor), Melnitchouk, A. (Contributor), Mendes, A. (Contributor), Mendoza, L. (Contributor), Mercadante, P. G. (Contributor), Merkin, M. (Contributor), Merritt, K. W. (Contributor), Meyer, A. (Contributor), Meyer, J. (Contributor), Michaut, M. (Contributor), Millet, T. (Contributor), Mitrevski, J. (Contributor), Molina, J. (Contributor), Mommsen, R. K. (Contributor), Mondal, N. K. (Contributor), Moore, R. W. (Contributor), Moulik, T. (Contributor), Muanza, G. S. (Contributor), Mulders, M. (Contributor), Mulhearn, M. (Contributor), Mundal, O. (Contributor), Mundim, L. (Contributor), Nagy, E. (Contributor), Naimuddin, M. (Contributor), Narain, M. (Contributor), Naumann, N. A. (Contributor), Neal, H. A. (Contributor), Negret, J. P. (Contributor), Neustroev, P. (Contributor), Nilsen, H. (Contributor), Noeding, C. (Contributor), Nomerotski, A. (Contributor), Novaes, S. F. (Contributor), Nunnemann, T. (Contributor), O'Dell, V. (Contributor), O'Neil, D. C. (Contributor), Obrant, G. (Contributor), Ochando, C. (Contributor), Onoprienko, D. (Contributor), Oshima, N. (Contributor), Osta, J. (Contributor), Otec, R. (Contributor), Otero y Garzón, Y. G. G. J. (Contributor), Owen, M. (Contributor), Padley, P. (Contributor), Pangilinan, M. (Contributor), Parashar, N. (Contributor), Park, S. (Contributor), Park, S. K. (Contributor), Parsons, J. (Contributor), Partridge, R. (Contributor), Parua, N. 
(Contributor), Patwa, A. (Contributor), Pawloski, G. (Contributor), Perea, P. M. (Contributor), Peters, K. (Contributor), Peters, Y. (Contributor), Pétroff, P. (Contributor), Petteni, M. (Contributor), Piegaia, R. (Contributor), Piper, J. (Contributor), Pleier, M. (Contributor), Podesta-Lerma, P. L. M. (Contributor), Podstavkov, V. M. (Contributor), Pogorelov, Y. (Contributor), Pol, M. (Contributor), Pompoš, A. (Contributor), Pope, B. G. (Contributor), Popov, A. V. (Contributor), Potter, C. (Contributor), Prado da Silva, D. S. W. L. (Contributor), Prosper, H. B. (Contributor), Protopopescu, S. (Contributor), Qian, J. (Contributor), Quadt, A. (Contributor), Quinn, B. (Contributor), Rakitine, A. (Contributor), Rangel, M. S. (Contributor), Rani, K. J. (Contributor), Ranjan, K. (Contributor), Ratoff, P. N. (Contributor), Renkel, P. (Contributor), Reucroft, S. (Contributor), Rich, P. (Contributor), Rijssenbeek, M. (Contributor), Ripp-Baudot, I. (Contributor), Rizatdinova, F. (Contributor), Robinson, S. (Contributor), Rodrigues, R. F. (Contributor), Royon, C. (Contributor), Rubinov, P. (Contributor), Ruchti, R. (Contributor), Safronov, G. (Contributor), Sajot, G. (Contributor), Sánchez-Hernández, A. (Contributor), Sanders, M. P. (Contributor), Santoro, A. (Contributor), Savage, G. (Contributor), Sawyer, L. (Contributor), Scanlon, T. (Contributor), Schaile, D. (Contributor), Schamberger, R. D. (Contributor), Scheglov, Y. (Contributor), Schellman, H. (Contributor), Schieferdecker, P. (Contributor), Schliephake, T. (Contributor), Schmitt, C. (Contributor), Schwanenberger, C. (Contributor), Schwartzman, A. (Contributor), Schwienhorst, R. (Contributor), Sekaric, J. (Contributor), Sengupta, S. (Contributor), Severini, H. (Contributor), Shabalina, E. (Contributor), Shamim, M. (Contributor), Shary, V. (Contributor), Shchukin, A. A. (Contributor), Shivpuri, R. K. (Contributor), Shpakov, D. (Contributor), Siccardi, V. (Contributor), Simak, V. (Contributor), Sirotenko, V. 
(Contributor), Skubic, P. (Contributor), Slattery, P. (Contributor), Smirnov, D. (Contributor), Smith, R. P. (Contributor), Snow, G. R. (Contributor), Snow, J. (Contributor), Snyder, S. (Contributor), Söldner-Rembold, S. (Contributor), Sonnenschein, L. (Contributor), Sopczak, A. (Contributor), Sosebee, M. (Contributor), Soustruznik, K. (Contributor), Souza, M. (Contributor), Spurlock, B. (Contributor), Stark, J. (Contributor), Steele, J. (Contributor), Stolin, V. (Contributor), Stone, A. (Contributor), Stoyanova, D. A. (Contributor), Strandberg, J. (Contributor), Strandberg, S. (Contributor), Strang, M. A. (Contributor), Strauss, M. (Contributor), Ströhmer, R. (Contributor), Strom, D. (Contributor), Strovink, M. (Contributor), Stutte, L. (Contributor), Sumowidagdo, S. (Contributor), Svoisky, P. (Contributor), Sznajder, A. (Contributor), Talby, M. (Contributor), Tamburello, P. (Contributor), Tanasijczuk, A. (Contributor), Taylor, W. (Contributor), Telford, P. (Contributor), Temple, J. (Contributor), Tiller, B. (Contributor), Tissandier, F. (Contributor), Titov, M. (Contributor), Tokmenin, V. V. (Contributor), Tomoto, M. (Contributor), Toole, T. (Contributor), Torchiani, I. (Contributor), Trefzger, T. (Contributor), Tsybychev, D. (Contributor), Tuchming, B. (Contributor), Tully, C. (Contributor), Tuts, P. M. (Contributor), Unalan, R. (Contributor), Uvarov, L. (Contributor), Uvarov, S. (Contributor), Uzunyan, S. (Contributor), Vachon, B. (Contributor), van den Berg, D. B. P. J. (Contributor), van Eijk, E. B. (Contributor), Van Kooten, K. R. (Contributor), van Leeuwen, L. W. M. (Contributor), Varelas, N. (Contributor), Varnes, E. W. (Contributor), Vartapetian, A. (Contributor), Vasilyev, I. A. (Contributor), Vaupel, M. (Contributor), Verdier, P. (Contributor), Vertogradov, L. S. (Contributor), Verzocchi, M. (Contributor), Villeneuve-Seguier, F. (Contributor), Vint, P. (Contributor), Von Toerne, T. E. (Contributor), Voutilainen, M. (Contributor), Vreeswijk, M. 
(Contributor), Wagner, R. (Contributor), Wahl, H. D. (Contributor), Wang, L. (Contributor), S Wang, W. M. H. L. (Contributor), Warchol, J. (Contributor), Watts, G. (Contributor), Wayne, M. (Contributor), Weber, G. (Contributor), Weber, M. (Contributor), Weerts, H. (Contributor), Wenger, A. (Contributor), Wermes, N. (Contributor), Wetstein, M. (Contributor), White, A. (Contributor), Wicke, D. (Contributor), Wilson, G. W. (Contributor), Wimpenny, S. J. (Contributor), Wobisch, M. (Contributor), Wood, D. R. (Contributor), Wyatt, T. R. (Contributor), Xie, Y. (Contributor), Yacoob, S. (Contributor), Yamada, R. (Contributor), Yan, M. (Contributor), Yasuda, T. (Contributor), Yatsunenko, Y. A. (Contributor), Yip, K. (Contributor), Yoo, H. D. (Contributor), Youn, S. W. (Contributor), Yu, C. (Contributor), Yu, J. (Contributor), Yurkewicz, A. (Contributor), Zatserklyaniy, A. (Contributor), Zeitnitz, C. (Contributor), Zhang, D. (Contributor), Zhao, T. (Contributor), Zhou, B. (Contributor), Zhu, J. (Contributor), Zielinski, M. (Contributor), Zieminska, D. (Contributor), Zieminski, A. (Contributor), Zivkovic, L. (Contributor), Zutshi, V. (Contributor) & Zverev, E. G. (Contributor), HEPData, 2009 DOI: 10.17182/hepdata.52512.v1, https://www.hepdata.net/record/ins750351?version=1 Measurement of the Inelastic Proton-Proton Cross-Section at $\sqrt{s}=7$ TeV with the ATLAS Detector Aad, G. (Contributor), Abbott, B. (Contributor), Abdallah, J. (Contributor), Abdelalim, A. A. (Contributor), Abdesselam, A. (Contributor), Abdinov, O. (Contributor), Abi, B. (Contributor), Abolins, M. (Contributor), Abramowicz, H. (Contributor), Abreu, H. (Contributor), Acerbi, E. (Contributor), Acharya, B. S. (Contributor), Adams, D. L. (Contributor), Addy, T. N. (Contributor), Adelman, J. (Contributor), Aderholz, M. (Contributor), Adomeit, S. (Contributor), Adragna, P. (Contributor), Adye, T. (Contributor), Aefsky, S. (Contributor), Aguilar-Saavedra, J. A. (Contributor), Aharrouche, M. 
(Contributor), Ahlen, S. P. (Contributor), Ahles, F. (Contributor), Ahmad, A. (Contributor) … Chan, K.
(Contributor), Chapleau, B. (Contributor), Chapman, J. D. (Contributor), Chapman, J. W. (Contributor), Chareyre, E. (Contributor), Charlton, D. G. (Contributor), Chavda, V. (Contributor), Cheatham, S. (Contributor), Chekanov, S. (Contributor), Chekulaev, S. V. (Contributor), Chelkov, G. A. (Contributor), Chelstowska, M. A. (Contributor), Chen, C. (Contributor), Chen, H. (Contributor), Chen, L. (Contributor), Chen, S. (Contributor), Chen, T. (Contributor), Chen, X. (Contributor), Cheng, S. (Contributor), Cheplakov, A. (Contributor), Chepurnov, V. F. (Contributor), Cherkaoui El Moursli, E. M. R. (Contributor), Chernyatin, V. (Contributor), Cheu, E. C. (Contributor), Cheung, S. L. (Contributor), Chevalier, L. (Contributor), Chiefari, G. (Contributor), Chikovani, L. (Contributor), Childers, J. T. (Contributor), Chilingarov, A. (Contributor), Chiodini, G. (Contributor), Chizhov, M. V. (Contributor), Choudalakis, G. (Contributor), Chouridou, S. (Contributor), Christidi, I. A. (Contributor), Christov, A. (Contributor), Chromek-Burckhart, D. (Contributor), Chu, M. L. (Contributor), Chudoba, J. (Contributor), Ciapetti, G. (Contributor), Ciba, K. (Contributor), Ciftci, A. K. (Contributor), Ciftci, R. (Contributor), Cinca, D. (Contributor), Cindro, V. (Contributor), Ciobotaru, M. D. (Contributor), Ciocca, C. (Contributor), Ciocio, A. (Contributor), Cirilli, M. (Contributor), Ciubancan, M. (Contributor), Clark, A. (Contributor), Clark, P. J. (Contributor), Cleland, W. (Contributor), Clemens, J. C. (Contributor), Clement, B. (Contributor), Clement, C. (Contributor), Clifft, R. W. (Contributor), Coadou, Y. (Contributor), Cobal, M. (Contributor), Coccaro, A. (Contributor), Cochran, J. (Contributor), Coe, P. (Contributor), Cogan, J. G. (Contributor), Coggeshall, J. (Contributor), Cogneras, E. (Contributor), Cojocaru, C. D. (Contributor), Colas, J. (Contributor), Colijn, A. P. (Contributor), Collard, C. (Contributor), Collins, N. J. (Contributor), Collins-Tooth, C. 
(Contributor), Collot, J. (Contributor), Colon, G. (Contributor), Comune, G. (Contributor), Conde Muiño, M. P. (Contributor), Coniavitis, E. (Contributor), Conidi, M. C. (Contributor), Consonni, M. (Contributor), Constantinescu, S. (Contributor), Conta, C. (Contributor), Conventi, F. (Contributor), Cook, J. (Contributor), Cooke, M. (Contributor), Cooper, B. D. (Contributor), Cooper-Sarkar, A. M. (Contributor), Cooper-Smith, N. J. (Contributor), Copic, K. (Contributor), Cornelissen, T. (Contributor), Corradi, M. (Contributor), Corriveau, F. (Contributor), Cortes-Gonzalez, A. (Contributor), Cortiana, G. (Contributor), Costa, G. (Contributor), Costa, M. J. (Contributor), Costanzo, D. (Contributor), Costin, T. (Contributor), Côté, D. (Contributor), Coura Torres, T. R. (Contributor), Courneyea, L. (Contributor), Cowan, G. (Contributor), Cowden, C. (Contributor), Cox, B. E. (Contributor), Cranmer, K. (Contributor), Crescioli, F. (Contributor), Cristinziani, M. (Contributor), Crosetti, G. (Contributor), Crupi, R. (Contributor), Crépé-renaudin, S. (Contributor), Cuciuc, C. (Contributor), Cuenca Almenar, A. C. (Contributor), Cuhadar Donszelmann, D. T. (Contributor), Cuneo, S. (Contributor), Curatolo, M. (Contributor), Curtis, C. J. (Contributor), Cwetanski, P. (Contributor), Czirr, H. (Contributor), Czyczula, Z. (Contributor), D'Auria, S. (Contributor), D'Onofrio, M. (Contributor), D'Orazio, A. (Contributor), Da Rocha Gesualdi Mello, R. G. M. A. (Contributor), Da Silva, S. P. V. M. (Contributor), Da Via, V. C. (Contributor), Dabrowski, W. (Contributor), Dahlhoff, A. (Contributor), Dai, T. (Contributor), Dallapiccola, C. (Contributor), Dam, M. (Contributor), Dameri, M. (Contributor), Damiani, D. S. (Contributor), Danielsson, H. O. (Contributor), Dankers, R. (Contributor), Dannheim, D. (Contributor), Dao, V. (Contributor), Darbo, G. (Contributor), Darlea, G. L. (Contributor), Daum, C. (Contributor), Dauvergne, J. P. (Contributor), Davey, W. (Contributor), Davidek, T. 
(Contributor), Davidson, N. (Contributor), Davidson, R. (Contributor), Davies, M. (Contributor), Davison, A. R. (Contributor), Dawe, E. (Contributor), Dawson, I. (Contributor), Dawson, J. W. (Contributor), Daya, R. K. (Contributor), De, K. (Contributor), De Asmundis, A. R. (Contributor), De Castro, C. S. (Contributor), De Castro Faria Salgado, C. F. S. P. E. (Contributor), De Cecco, C. S. (Contributor), De Graat, G. J. (Contributor), De Groot, G. N. (Contributor), De Jong, J. P. (Contributor), De La Taille, L. T. C. (Contributor), De La Torre, L. T. H. (Contributor), De Lotto, L. B. (Contributor), De Mora, M. L. (Contributor), De Nooij, N. L. (Contributor), De Oliveira Branco, O. B. M. (Contributor), De Pedis, P. D. (Contributor), De Saintignon, S. P. (Contributor), De Salvo, S. A. (Contributor), De Sanctis, S. U. (Contributor), De Santo, S. A. (Contributor), De Vivie De Regie, V. D. R. J. B. (Contributor), Dean, S. (Contributor), Dedovich, D. V. (Contributor), Degenhardt, J. (Contributor), Dehchar, M. (Contributor), Deile, M. (Contributor), Del Papa, P. C. (Contributor), Del Peso, P. J. (Contributor), Del Prete, P. T. (Contributor), Dell'Acqua, A. (Contributor), Dell'Asta, L. (Contributor), Della Pietra, P. M. (Contributor), Della Volpe, V. D. (Contributor), Delmastro, M. (Contributor), Delpierre, P. (Contributor), Delruelle, N. (Contributor), Delsart, P. A. (Contributor), Deluca, C. (Contributor), Demers, S. (Contributor), Demichev, M. (Contributor), Demirkoz, B. (Contributor), Deng, J. (Contributor), Denisov, S. P. (Contributor), Derendarz, D. (Contributor), Derkaoui, J. E. (Contributor), Derue, F. (Contributor), Dervan, P. (Contributor), Desch, K. (Contributor), Devetak, E. (Contributor), Deviveiros, P. O. (Contributor), Dewhurst, A. (Contributor), Dewilde, B. (Contributor), Dhaliwal, S. (Contributor), Dhullipudi, R. (Contributor), Di Ciaccio, C. A. (Contributor), Di Ciaccio, C. L. (Contributor), Di Girolamo, G. A. (Contributor), Di Girolamo, G. B. 
(Contributor), Di Luise, L. S. (Contributor), Di Mattia, M. A. (Contributor), Di Micco, M. B. (Contributor), Di Nardo, N. R. (Contributor), Di Simone, S. A. (Contributor), Di Sipio, S. R. (Contributor), Diaz, M. A. (Contributor), Diblen, F. (Contributor), Diehl, E. B. (Contributor), Dietl, H. (Contributor), Dietrich, J. (Contributor), Dietzsch, T. A. (Contributor), Diglio, S. (Contributor), Dindar Yagci, Y. K. (Contributor), Dingfelder, J. (Contributor), Dionisi, C. (Contributor), Dita, P. (Contributor), Dita, S. (Contributor), Dittus, F. (Contributor), Djama, F. (Contributor), Djilkibaev, R. (Contributor), Djobava, T. (Contributor), Do Vale, V. M. A. B. (Contributor), Do Valle Wemans, V. W. A. (Contributor), Doan, T. K. O. (Contributor), Dobbs, M. (Contributor), Dobinson, R. (Contributor), Dobos, D. (Contributor), Dobson, E. (Contributor), Dobson, M. (Contributor), Dodd, J. (Contributor), Dogan, O. B. (Contributor), Doglioni, C. (Contributor), Doherty, T. (Contributor), Doi, Y. (Contributor), Dolejsi, J. (Contributor), Dolenc, I. (Contributor), Dolezal, Z. (Contributor), Dolgoshein, B. A. (Contributor), Dohmae, T. (Contributor), Donadelli, M. (Contributor), Donega, M. (Contributor), Donini, J. (Contributor), Dopke, J. (Contributor), Doria, A. (Contributor), Dos Anjos, A. A. (Contributor), Dosil, M. (Contributor), Dotti, A. (Contributor), Dova, M. T. (Contributor), Dowell, J. D. (Contributor), Doxiadis, A. D. (Contributor), Doyle, A. T. (Contributor), Drasal, Z. (Contributor), Drees, J. (Contributor), Dressnandt, N. (Contributor), Drevermann, H. (Contributor), Driouichi, C. (Contributor), Dris, M. (Contributor), Drohan, J. G. (Contributor), Dubbert, J. (Contributor), Dubbs, T. (Contributor), Dube, S. (Contributor), Duchovni, E. (Contributor), Duckeck, G. (Contributor), Dudarev, A. (Contributor), Dudziak, F. (Contributor), Dührssen, M. (Contributor), Duerdoth, I. P. (Contributor), Duflot, L. (Contributor), Dufour, M. (Contributor), Dunford, M. 
(Contributor), Duran Yildiz, Y. H. (Contributor), Duxfield, R. (Contributor), Dwuznik, M. (Contributor), Dydak, F. (Contributor), Dzahini, D. (Contributor), Düren, M. (Contributor), Ebenstein, W. L. (Contributor), Ebke, J. (Contributor), Eckert, S. (Contributor), Eckweiler, S. (Contributor), Edmonds, K. (Contributor), Edwards, C. A. (Contributor), Ehrenfeld, W. (Contributor), Ehrich, T. (Contributor), Eifert, T. (Contributor), Eigen, G. (Contributor), Einsweiler, K. (Contributor), Eisenhandler, E. (Contributor), Ekelof, T. (Contributor), El Kacimi, K. M. (Contributor), Ellert, M. (Contributor), Elles, S. (Contributor), Ellinghaus, F. (Contributor), Ellis, K. (Contributor), Ellis, N. (Contributor), Elmsheuser, J. (Contributor), Elsing, M. (Contributor), Ely, R. (Contributor), Emeliyanov, D. (Contributor), Engelmann, R. (Contributor), Engl, A. (Contributor), Epp, B. (Contributor), Eppig, A. (Contributor), Erdmann, J. (Contributor), Ereditato, A. (Contributor), Eriksson, D. (Contributor), Ernst, J. (Contributor), Ernst, M. (Contributor), Ernwein, J. (Contributor), Errede, D. (Contributor), Errede, S. (Contributor), Ertel, E. (Contributor), Escalier, M. (Contributor), Escobar, C. (Contributor), Espinal Curull, C. X. (Contributor), Esposito, B. (Contributor), Etienne, F. (Contributor), Etienvre, A. I. (Contributor), Etzion, E. (Contributor), Evangelakou, D. (Contributor), Evans, H. (Contributor), Fabbri, L. (Contributor), Fabre, C. (Contributor), Fakhrutdinov, R. M. (Contributor), Falciano, S. (Contributor), Falou, A. C. (Contributor), Fang, Y. (Contributor), Fanti, M. (Contributor), Farbin, A. (Contributor), Farilla, A. (Contributor), Farley, J. (Contributor), Farooque, T. (Contributor), Farrington, S. M. (Contributor), Farthouat, P. (Contributor), Fasching, D. (Contributor), Fassnacht, P. (Contributor), Fassouliotis, D. (Contributor), Fatholahzadeh, B. (Contributor), Favareto, A. (Contributor), Fayard, L. (Contributor), Fazio, S. (Contributor), Febbraro, R. 
(Contributor), Federic, P. (Contributor), Fedin, O. L. (Contributor), Fedorko, I. (Contributor), Fedorko, W. (Contributor), Fehling-Kaschek, M. (Contributor), Feligioni, L. (Contributor), Fellmann, D. (Contributor), Felzmann, C. U. (Contributor), Feng, C. (Contributor), Feng, E. J. (Contributor), Fenyuk, A. B. (Contributor), Ferencei, J. (Contributor), Ferland, J. (Contributor), Fernando, W. (Contributor), Ferrag, S. (Contributor), Ferrando, J. (Contributor), Ferrara, V. (Contributor), Ferrari, A. (Contributor), Ferrari, P. (Contributor), Ferrari, R. (Contributor), Ferrer, A. (Contributor), Ferrer, M. L. (Contributor), Ferrere, D. (Contributor), Ferretti, C. (Contributor), Ferretto Parodi, P. A. (Contributor), Fiascaris, M. (Contributor), Fiedler, F. (Contributor), Filipčič, A. (Contributor), Filippas, A. (Contributor), Filthaut, F. (Contributor), Fincke-Keeler, M. (Contributor), Fiolhais, M. C. N. (Contributor), Fiorini, L. (Contributor), Firan, A. (Contributor), Fischer, G. (Contributor), Fischer, P. (Contributor), Fisher, M. J. (Contributor), Fisher, S. M. (Contributor), Flammer, J. (Contributor), Flechl, M. (Contributor), Fleck, I. (Contributor), Fleckner, J. (Contributor), Fleischmann, P. (Contributor), Fleischmann, S. (Contributor), Flick, T. (Contributor), Flores Castillo, C. L. R. (Contributor), Flowerdew, M. J. (Contributor), Föhlisch, F. (Contributor), Fokitis, M. (Contributor), Fonseca Martin, M. T. (Contributor), Forbush, D. A. (Contributor), Formica, A. (Contributor), Forti, A. (Contributor), Fortin, D. (Contributor), Foster, J. M. (Contributor), Fournier, D. (Contributor), Foussat, A. (Contributor), Fowler, A. J. (Contributor), Fowler, K. (Contributor), Fox, H. (Contributor), Francavilla, P. (Contributor), Franchino, S. (Contributor), Francis, D. (Contributor), Frank, T. (Contributor), Franklin, M. (Contributor), Franz, S. (Contributor), Fraternali, M. (Contributor), Fratina, S. (Contributor), French, S. T. (Contributor), Froeschl, R. 
(Contributor), Froidevaux, D. (Contributor), Frost, J. A. (Contributor), Fukunaga, C. (Contributor), Fullana Torregrosa, T. E. (Contributor), Fuster, J. (Contributor), Gabaldon, C. (Contributor), Gabizon, O. (Contributor), Gadfort, T. (Contributor), Gadomski, S. (Contributor), Gagliardi, G. (Contributor), Gagnon, P. (Contributor), Galea, C. (Contributor), Gallas, E. J. (Contributor), Gallas, M. V. (Contributor), Gallo, V. (Contributor), Gallop, B. J. (Contributor), Gallus, P. (Contributor), Galyaev, E. (Contributor), Gan, K. K. (Contributor), Gao, Y. S. (Contributor), Gapienko, V. A. (Contributor), Gaponenko, A. (Contributor), Garberson, F. (Contributor), Garcia-Sciveres, M. (Contributor), García, C. (Contributor), García Navarro, N. J. E. (Contributor), Gardner, R. W. (Contributor), Garelli, N. (Contributor), Garitaonandia, H. (Contributor), Garonne, V. (Contributor), Garvey, J. (Contributor), Gatti, C. (Contributor), Gaudio, G. (Contributor), Gaumer, O. (Contributor), Gaur, B. (Contributor), Gauthier, L. (Contributor), Gavrilenko, I. L. (Contributor), Gay, C. (Contributor), Gaycken, G. (Contributor), Gayde, J. (Contributor), Gazis, E. N. (Contributor), Ge, P. (Contributor), Gee, C. N. P. (Contributor), Geerts, D. A. A. (Contributor), Geich-Gimbel, G. (Contributor), Gellerstedt, K. (Contributor), Gemme, C. (Contributor), Gemmell, A. (Contributor), Genest, M. H. (Contributor), Gentile, S. (Contributor), George, M. (Contributor), George, S. (Contributor), Gerlach, P. (Contributor), Gershon, A. (Contributor), Geweniger, C. (Contributor), Ghazlane, H. (Contributor), Ghez, P. (Contributor), Ghodbane, N. (Contributor), Giacobbe, B. (Contributor), Giagu, S. (Contributor), Giakoumopoulou, V. (Contributor), Giangiobbe, V. (Contributor), Gianotti, F. (Contributor), Gibbard, B. (Contributor), Gibson, A. (Contributor), Gibson, S. M. (Contributor), Gieraltowski, G. F. (Contributor), Gilbert, L. M. (Contributor), Gilchriese, M. (Contributor), Gilewsky, V. 
(Contributor), Gillberg, D. (Contributor), Gillman, A. R. (Contributor), Gingrich, D. M. (Contributor), Ginzburg, J. (Contributor), Giokaris, N. (Contributor), Giordano, R. (Contributor), Giorgi, F. M. (Contributor), Giovannini, P. (Contributor), Giraud, P. F. (Contributor), Giugni, D. (Contributor), Giunta, M. (Contributor), Giusti, P. (Contributor), Gjelsten, B. K. (Contributor), Gladilin, L. K. (Contributor), Glasman, C. (Contributor), Glatzer, J. (Contributor), Glazov, A. (Contributor), Glitza, K. W. (Contributor), Glonti, G. L. (Contributor), Godfrey, J. (Contributor), Godlewski, J. (Contributor), Goebel, M. (Contributor), Göpfert, T. (Contributor), Goeringer, C. (Contributor), Gössling, C. (Contributor), Göttfert, T. (Contributor), Goldfarb, S. (Contributor), Goldin, D. (Contributor), Golling, T. (Contributor), Golovnia, S. N. (Contributor), Gomes, A. (Contributor), Gomez Fajardo, F. L. S. (Contributor), Gonçalo, R. (Contributor), Goncalves Pinto Firmino Da Costa, P. F. D. C. J. (Contributor), Gonella, L. (Contributor), Gonidec, A. (Contributor), Gonzalez, S. (Contributor), González De La Hoz, D. L. H. S. (Contributor), Gonzalez Silva, S. M. L. (Contributor), Gonzalez-Sevilla, S. (Contributor), Goodson, J. J. (Contributor), Goossens, L. (Contributor), Gorbounov, P. A. (Contributor), Gordon, H. A. (Contributor), Gorelov, I. (Contributor), Gorfine, G. (Contributor), Gorini, B. (Contributor), Gorini, E. (Contributor), Gorišek, A. (Contributor), Gornicki, E. (Contributor), Gorokhov, S. A. (Contributor), Goryachev, V. N. (Contributor), Gosdzik, B. (Contributor), Gosselink, M. (Contributor), Gostkin, I. (Contributor), Gouanère, M. (Contributor), Gough Eschrich, E. I. (Contributor), Gouighri, M. (Contributor), Goujdami, D. (Contributor), Goulette, M. P. (Contributor), Goussiou, A. G. (Contributor), Goy, C. (Contributor), Grabowska-Bold, I. (Contributor), Grabski, V. (Contributor), Grafström, P. (Contributor), Grah, C. (Contributor), Grahn, K. 
(Contributor), Grancagnolo, F. (Contributor), Grancagnolo, S. (Contributor), Grassi, V. (Contributor), Gratchev, V. (Contributor), Grau, N. (Contributor), Gray, H. M. (Contributor), Gray, J. A. (Contributor), Graziani, E. (Contributor), Grebenyuk, O. G. (Contributor), Greenfield, D. (Contributor), Greenshaw, T. (Contributor), Greenwood, Z. D. (Contributor), Gregor, I. M. (Contributor), Grenier, P. (Contributor), Griesmayer, E. (Contributor), Griffiths, J. (Contributor), Grigalashvili, N. (Contributor), Grillo, A. A. (Contributor), Grinstein, S. (Contributor), Gris, P. L. Y. (Contributor), Grishkevich, Y. V. (Contributor), Grivaz, J. (Contributor), Grognuz, J. (Contributor), Groh, M. (Contributor), Gross, E. (Contributor), Grosse-Knetter, J. (Contributor), Groth-Jensen, J. (Contributor), Gruwe, M. (Contributor), Grybel, K. (Contributor), Guarino, V. J. (Contributor), Guest, D. (Contributor), Guicheney, C. (Contributor), Guida, A. (Contributor), Guillemin, T. (Contributor), Guindon, S. (Contributor), Guler, H. (Contributor), Gunther, J. (Contributor), Guo, B. (Contributor), Guo, J. (Contributor), Gupta, A. (Contributor), Gusakov, Y. (Contributor), Gushchin, V. N. (Contributor), Gutierrez, A. (Contributor), Gutierrez, P. (Contributor), Guttman, N. (Contributor), Gutzwiller, O. (Contributor), Guyot, C. (Contributor), Gwenlan, C. (Contributor), Gwilliam, C. B. (Contributor), Haas, A. (Contributor), Haas, S. (Contributor), Haber, C. (Contributor), Hackenburg, R. (Contributor), Hadavand, H. K. (Contributor), Hadley, D. R. (Contributor), Haefner, P. (Contributor), Hahn, F. (Contributor), Haider, S. (Contributor), Hajduk, Z. (Contributor), Hakobyan, H. (Contributor), Haller, J. (Contributor), Hamacher, K. (Contributor), Hamal, P. (Contributor), Hamilton, A. (Contributor), Hamilton, S. (Contributor), Han, H. (Contributor), Han, L. (Contributor), Hanagaki, K. (Contributor), Hance, M. (Contributor), Handel, C. (Contributor), Hanke, P. (Contributor), Hansen, C. J. 
(Contributor), Hansen, J. R. (Contributor), Hansen, J. B. (Contributor), Hansen, J. D. (Contributor), Hansen, P. H. (Contributor), Hansson, P. (Contributor), Hara, K. (Contributor), Hare, G. A. (Contributor), Harenberg, T. (Contributor), Harkusha, S. (Contributor), Harper, D. (Contributor), Harrington, R. D. (Contributor), Harris, O. M. (Contributor), Harrison, K. (Contributor), Hartert, J. (Contributor), Hartjes, F. (Contributor), Haruyama, T. (Contributor), Harvey, A. (Contributor), Hasegawa, S. (Contributor), Hasegawa, Y. (Contributor), Hassani, S. (Contributor), Hatch, M. (Contributor), Hauff, D. (Contributor), Haug, S. (Contributor), Hauschild, M. (Contributor), Hauser, R. (Contributor), Havranek, M. (Contributor), Hawes, B. M. (Contributor), Hawkes, C. M. (Contributor), Hawkings, R. J. (Contributor), Hawkins, D. (Contributor), Hayakawa, T. (Contributor), Hayden, D. (Contributor), Hayward, H. S. (Contributor), Haywood, S. J. (Contributor), Hazen, E. (Contributor), He, M. (Contributor), Head, S. J. (Contributor), Hedberg, V. (Contributor), Heelan, L. (Contributor), Heim, S. (Contributor), Heinemann, B. (Contributor), Heisterkamp, S. (Contributor), Helary, L. (Contributor), Heldmann, M. (Contributor), Heller, M. (Contributor), Hellman, S. (Contributor), Helsens, C. (Contributor), Henderson, R. C. W. (Contributor), Henke, M. (Contributor), Henrichs, A. (Contributor), Henriques Correia, C. A. M. (Contributor), Henrot-Versille, S. (Contributor), Henry-Couannier, F. (Contributor), Hensel, C. (Contributor), Henß, T. (Contributor), Hernandez, C. M. (Contributor), Hernández Jiménez, J. Y. (Contributor), Herrberg, R. (Contributor), Hershenhorn, A. D. (Contributor), Herten, G. (Contributor), Hertenberger, R. (Contributor), Hervas, L. (Contributor), Hessey, N. P. (Contributor), Hidvegi, A. (Contributor), Higøn-Rodriguez, E. (Contributor), Hill, D. (Contributor), Hill, J. C. (Contributor), Hill, N. (Contributor), Hiller, K. H. (Contributor), Hillert, S. 
(Contributor), Hillier, S. J. (Contributor), Hinchliffe, I. (Contributor), Hines, E. (Contributor), Hirose, M. (Contributor), Hirsch, F. (Contributor), Hirschbuehl, D. (Contributor), Hobbs, J. (Contributor), Hod, N. (Contributor), Hodgkinson, M. C. (Contributor), Hodgson, P. (Contributor), Hoecker, A. (Contributor), Hoeferkamp, M. R. (Contributor), Hoffman, J. (Contributor), Hoffmann, D. (Contributor), Hohlfeld, M. (Contributor), Holder, M. (Contributor), Holmes, A. (Contributor), Holmgren, S. O. (Contributor), Holy, T. (Contributor), Holzbauer, J. L. (Contributor), Homma, Y. (Contributor), Hooft Van Huysduynen, V. H. L. (Contributor), Horazdovsky, T. (Contributor), Horn, C. (Contributor), Horner, S. (Contributor), Horton, K. (Contributor), Hostachy, J. (Contributor), Hou, S. (Contributor), Houlden, M. A. (Contributor), Hoummada, A. (Contributor), Howarth, J. (Contributor), Howell, D. F. (Contributor), Hristova, I. (Contributor), Hrivnac, J. (Contributor), Hruska, I. (Contributor), Hryn'Ova, T. (Contributor), Hsu, P. J. (Contributor), Hsu, S. (Contributor), Huang, G. S. (Contributor), Hubacek, Z. (Contributor), Hubaut, F. (Contributor), Huegging, F. (Contributor), Huffman, T. B. (Contributor), Hughes, E. W. (Contributor), Hughes, G. (Contributor), Hughes-Jones, R. E. (Contributor), Huhtinen, M. (Contributor), Hurst, P. (Contributor), Hurwitz, M. (Contributor), Husemann, U. (Contributor), Huseynov, N. (Contributor), Huston, J. (Contributor), Huth, J. (Contributor), Iacobucci, G. (Contributor), Iakovidis, G. (Contributor), Ibbotson, M. (Contributor), Ibragimov, I. (Contributor), Ichimiya, R. (Contributor), Iconomidou-Fayard, L. (Contributor), Idarraga, J. (Contributor), Idzik, M. (Contributor), Iengo, P. (Contributor), Igonkina, O. (Contributor), Ikegami, Y. (Contributor), Ikeno, M. (Contributor), Ilchenko, Y. (Contributor), Iliadis, D. (Contributor), Imbault, D. (Contributor), Imhaeuser, M. (Contributor), Imori, M. (Contributor), Ince, T. 
(Contributor), Inigo-Golfin, J. (Contributor), Ioannou, P. (Contributor), Iodice, M. (Contributor), Ionescu, G. (Contributor), Irles Quiles, Q. A. (Contributor), Ishii, K. (Contributor), Ishikawa, A. (Contributor), Ishino, M. (Contributor), Ishmukhametov, R. (Contributor), Issever, C. (Contributor), Istin, S. (Contributor), Itoh, Y. (Contributor), Ivashin, A. V. (Contributor), Iwanski, W. (Contributor), Iwasaki, H. (Contributor), Izen, J. M. (Contributor), Izzo, V. (Contributor), Jackson, B. (Contributor), Jackson, J. N. (Contributor), Jackson, P. (Contributor), Jaekel, M. R. (Contributor), Jain, V. (Contributor), Jakobs, K. (Contributor), Jakobsen, S. (Contributor), Jakubek, J. (Contributor), Jana, D. K. (Contributor), Jankowski, E. (Contributor), Jansen, E. (Contributor), Jantsch, A. (Contributor), Janus, M. (Contributor), Jarlskog, G. (Contributor), Jeanty, L. (Contributor), Jelen, K. (Contributor), Jen-La Plante, P. I. (Contributor), Jenni, P. (Contributor), Jeremie, A. (Contributor), Jež, P. (Contributor), Jézéquel, S. (Contributor), Jha, M. K. (Contributor), Ji, H. (Contributor), Ji, W. (Contributor), Jia, J. (Contributor), Jiang, Y. (Contributor), Jimenez Belenguer, B. M. (Contributor), Jin, G. (Contributor), Jin, S. (Contributor), Jinnouchi, O. (Contributor), Joergensen, M. D. (Contributor), Joffe, D. (Contributor), Johansen, L. G. (Contributor), Johansen, M. (Contributor), Johansson, K. E. (Contributor), Johansson, P. (Contributor), Johnert, S. (Contributor), Johns, K. A. (Contributor), Jon-And, K. (Contributor), Jones, G. (Contributor), Jones, R. W. L. (Contributor), Jones, T. W. (Contributor), Jones, T. J. (Contributor), Jonsson, O. (Contributor), Joram, C. (Contributor), Jorge, P. M. (Contributor), Joseph, J. (Contributor), Ju, X. (Contributor), Juranek, V. (Contributor), Jussel, P. (Contributor), Kabachenko, V. V. (Contributor), Kabana, S. (Contributor), Kaci, M. (Contributor), Kaczmarska, A. (Contributor), Kadlecik, P. (Contributor), Kado, M. 
(Contributor), Kagan, H. (Contributor), Kagan, M. (Contributor), Kaiser, S. (Contributor), Kajomovitz, E. (Contributor), Kalinin, S. (Contributor), Kalinovskaya, L. V. (Contributor), Kama, S. (Contributor), Kanaya, N. (Contributor), Kaneda, M. (Contributor), Kanno, T. (Contributor), Kantserov, V. A. (Contributor), Kanzaki, J. (Contributor), Kaplan, B. (Contributor), Kapliy, A. (Contributor), Kaplon, J. (Contributor), Kar, D. (Contributor), Karagoz, M. (Contributor), Karnevskiy, M. (Contributor), Karr, K. (Contributor), Kartvelishvili, V. (Contributor), Karyukhin, A. N. (Contributor), Kashif, L. (Contributor), Kasmi, A. (Contributor), Kass, R. D. (Contributor), Kastanas, A. (Contributor), Kataoka, M. (Contributor), Kataoka, Y. (Contributor), Katsoufis, E. (Contributor), Katzy, J. (Contributor), Kaushik, V. (Contributor), Kawagoe, K. (Contributor), Kawamoto, T. (Contributor), Kawamura, G. (Contributor), Kayl, M. S. (Contributor), Kazanin, V. A. (Contributor), Kazarinov, M. Y. (Contributor), Kazi, S. I. (Contributor), Keates, J. R. (Contributor), Keeler, R. (Contributor), Kehoe, R. (Contributor), Keil, M. (Contributor), Kekelidze, G. D. (Contributor), Kelly, M. (Contributor), Kennedy, J. (Contributor), Kenney, C. J. (Contributor), Kenyon, M. (Contributor), Kepka, O. (Contributor), Kerschen, N. (Contributor), Kerševan, B. P. (Contributor), Kersten, S. (Contributor), Kessoku, K. (Contributor), Ketterer, C. (Contributor), Khakzad, M. (Contributor), Khalil-Zada, F. (Contributor), Khandanyan, H. (Contributor), Khanov, A. (Contributor), Kharchenko, D. (Contributor), Khodinov, A. (Contributor), Kholodenko, A. G. (Contributor), Khomich, A. (Contributor), Khoo, T. J. (Contributor), Khoriauli, G. (Contributor), Khoroshilov, A. (Contributor), Khovanskiy, N. (Contributor), Khovanskiy, V. (Contributor), Khramov, E. (Contributor), Khubua, J. (Contributor), Kilvington, G. (Contributor), Kim, H. (Contributor), Kim, M. S. (Contributor), Kim, P. C. (Contributor), Kim, S. H. 
L. R. (Contributor), Van Der Poel, D. P. E. (Contributor), Van Der Ster, D. S. D. (Contributor), Van Eijk, E. B. (Contributor), Van Eldik, E. N. (Contributor), Van Gemmeren, G. P. (Contributor), Van Kesteren, K. Z. (Contributor), Van Vulpen, V. I. (Contributor), Vandelli, W. (Contributor), Vandoni, G. (Contributor), Vaniachine, A. (Contributor), Vankov, P. (Contributor), Vannucci, F. (Contributor), Varela Rodriguez, R. F. (Contributor), Vari, R. (Contributor), Varnes, E. W. (Contributor), Varouchas, D. (Contributor), Vartapetian, A. (Contributor), Varvell, K. E. (Contributor), Vassilakopoulos, V. I. (Contributor), Vazeille, F. (Contributor), Vegni, G. (Contributor), Veillet, J. J. (Contributor), Vellidis, C. (Contributor), Veloso, F. (Contributor), Veness, R. (Contributor), Veneziano, S. (Contributor), Ventura, A. (Contributor), Ventura, D. (Contributor), Venturi, M. (Contributor), Venturi, N. (Contributor), Vercesi, V. (Contributor), Verducci, M. (Contributor), Verkerke, W. (Contributor), Vermeulen, J. C. (Contributor), Vest, A. (Contributor), Vetterli, M. C. (Contributor), Vichou, I. (Contributor), Vickey, T. (Contributor), Viehhauser, G. H. A. (Contributor), Viel, S. (Contributor), Villa, M. (Contributor), Villaplana Perez, P. M. (Contributor), Vilucchi, E. (Contributor), Vincter, M. G. (Contributor), Vinek, E. (Contributor), Vinogradov, V. B. (Contributor), Virchaux, M. (Contributor), Viret, S. (Contributor), Virzi, J. (Contributor), Vitale, A. (Contributor), Vitells, O. (Contributor), Viti, M. (Contributor), Vivarelli, I. (Contributor), Vives Vaque, V. F. (Contributor), Vlachos, S. (Contributor), Vlasak, M. (Contributor), Vlasov, N. (Contributor), Vogel, A. (Contributor), Vokac, P. (Contributor), Volpi, G. (Contributor), Volpi, M. (Contributor), Volpini, G. (Contributor), Von Der Schmitt, D. S. H. (Contributor), Von Loeben, L. J. (Contributor), Von Radziewski, R. H. (Contributor), Von Toerne, T. E. (Contributor), Vorobel, V. (Contributor), Vorobiev, A. P. 
(Contributor), Vorwerk, V. (Contributor), Vos, M. (Contributor), Voss, R. (Contributor), Voss, T. T. (Contributor), Vossebeld, J. H. (Contributor), Vovenko, A. S. (Contributor), Vranjes, N. (Contributor), Vranjes Milosavljevic, M. M. (Contributor), Vrba, V. (Contributor), Vreeswijk, M. (Contributor), Vu Anh, A. T. (Contributor), Vuillermet, R. (Contributor), Vukotic, I. (Contributor), Wagner, W. (Contributor), Wagner, P. (Contributor), Wahlen, H. (Contributor), Wakabayashi, J. (Contributor), Walbersloh, J. (Contributor), Walch, S. (Contributor), Walder, J. (Contributor), Walker, R. (Contributor), Walkowiak, W. (Contributor), Wall, R. (Contributor), Waller, P. (Contributor), Wang, C. (Contributor), Wang, H. (Contributor), Wang, H. (Contributor), Wang, J. (Contributor), Wang, J. (Contributor), Wang, J. C. (Contributor), Wang, R. (Contributor), Wang, S. M. (Contributor), Warburton, A. (Contributor), Ward, C. P. (Contributor), Warsinsky, M. (Contributor), Watkins, P. M. (Contributor), Watson, A. T. (Contributor), Watson, M. F. (Contributor), Watts, G. (Contributor), Watts, S. (Contributor), Waugh, A. T. (Contributor), Waugh, B. M. (Contributor), Weber, J. (Contributor), Weber, M. (Contributor), Weber, M. S. (Contributor), Weber, P. (Contributor), Weidberg, A. R. (Contributor), Weigell, P. (Contributor), Weingarten, J. (Contributor), Weiser, C. (Contributor), Wellenstein, H. (Contributor), Wells, P. S. (Contributor), Wen, M. (Contributor), Wenaus, T. (Contributor), Wendler, S. (Contributor), Weng, Z. (Contributor), Wengler, T. (Contributor), Wenig, S. (Contributor), Wermes, N. (Contributor), Werner, M. (Contributor), Werner, P. (Contributor), Werth, M. (Contributor), Wessels, M. (Contributor), Weydert, C. (Contributor), Whalen, K. (Contributor), Wheeler-Ellis, S. J. (Contributor), Whitaker, S. P. (Contributor), White, A. (Contributor), White, M. J. (Contributor), White, S. (Contributor), Whitehead, S. R. (Contributor), Whiteson, D. (Contributor), Whittington, D. 
(Contributor), Wicek, F. (Contributor), Wicke, D. (Contributor), Wickens, F. J. (Contributor), Wiedenmann, W. (Contributor), Wielers, M. (Contributor), Wienemann, P. (Contributor), Wiglesworth, C. (Contributor), Wiik, L. A. M. (Contributor), Wijeratne, P. A. (Contributor), Wildauer, A. (Contributor), Wildt, M. A. (Contributor), Wilhelm, I. (Contributor), Wilkens, H. G. (Contributor), Will, J. Z. (Contributor), Williams, E. (Contributor), Williams, H. H. (Contributor), Willis, W. (Contributor), Willocq, S. (Contributor), Wilson, J. A. (Contributor), Wilson, M. G. (Contributor), Wilson, A. (Contributor), Wingerter-Seez, I. (Contributor), Winkelmann, S. (Contributor), Winklmeier, F. (Contributor), Wittgen, M. (Contributor), Wolter, M. W. (Contributor), Wolters, H. (Contributor), Wooden, G. (Contributor), Wosiek, B. K. (Contributor), Wotschack, J. (Contributor), Woudstra, M. J. (Contributor), Wraight, K. (Contributor), Wright, C. (Contributor), Wrona, B. (Contributor), Wu, S. L. (Contributor), Wu, X. (Contributor), Wu, Y. (Contributor), Wulf, E. (Contributor), Wunstorf, R. (Contributor), Wynne, B. M. (Contributor), Xaplanteris, L. (Contributor), Xella, S. (Contributor), Xie, S. (Contributor), Xie, Y. (Contributor), Xu, C. (Contributor), Xu, D. (Contributor), Xu, G. (Contributor), Yabsley, B. (Contributor), Yamada, M. (Contributor), Yamamoto, A. (Contributor), Yamamoto, K. (Contributor), Yamamoto, S. (Contributor), Yamamura, T. (Contributor), Yamaoka, J. (Contributor), Yamazaki, T. (Contributor), Yamazaki, Y. (Contributor), Yan, Z. (Contributor), Yang, H. (Contributor), Yang, U. K. (Contributor), Yang, Y. (Contributor), Yang, Y. (Contributor), Yang, Z. (Contributor), Yanush, S. (Contributor), Yao, W. (Contributor), Yao, Y. (Contributor), Yasu, Y. (Contributor), Ybeles Smit, S. G. V. (Contributor), Ye, J. (Contributor), Ye, S. (Contributor), Yilmaz, M. (Contributor), Yoosoofmiya, R. (Contributor), Yorita, K. (Contributor), Yoshida, R. (Contributor), Young, C. 
(Contributor), Youssef, S. (Contributor), Yu, D. (Contributor), Yu, J. (Contributor), Yu, J. (Contributor), Yuan, L. (Contributor), Yurkewicz, A. (Contributor), Zaets, V. G. (Contributor), Zaidan, R. (Contributor), Zaitsev, A. M. (Contributor), Zajacova, Z. (Contributor), Zalite, Z. (Contributor), Zanello, L. (Contributor), Zarzhitsky, P. (Contributor), Zaytsev, A. (Contributor), Zeitnitz, C. (Contributor), Zeller, M. (Contributor), Zema, P. F. (Contributor), Zemla, A. (Contributor), Zendler, C. (Contributor), Zenin, A. V. (Contributor), Zenin, O. (Contributor), Ženiš, T. (Contributor), Zenonos, Z. (Contributor), Zenz, S. (Contributor), Zerwas, D. (Contributor), Zevi Della Porta, D. P. G. (Contributor), Zhan, Z. (Contributor), Zhang, D. (Contributor), Zhang, H. (Contributor), Zhang, J. (Contributor), Zhang, X. (Contributor), Zhang, Z. (Contributor), Zhao, L. (Contributor), Zhao, T. (Contributor), Zhao, Z. (Contributor), Zhemchugov, A. (Contributor), Zheng, S. (Contributor), Zhong, J. (Contributor), Zhou, B. (Contributor), Zhou, N. (Contributor), Zhou, Y. (Contributor), Zhu, C. G. (Contributor), Zhu, H. (Contributor), Zhu, Y. (Contributor), Zhuang, X. (Contributor), Zhuravlov, V. (Contributor), Zieminska, D. (Contributor), Zimmermann, R. (Contributor), Zimmermann, S. (Contributor), Zimmermann, S. (Contributor), Ziolkowski, M. (Contributor), Zitoun, R. (Contributor), Živković, L. (Contributor), Zmouchko, V. V. (Contributor), Zobernig, G. (Contributor), Zoccoli, A. (Contributor), Zolnierowski, Y. (Contributor), Zsenei, A. (Contributor), Zur Nedden, N. M. (Contributor), Zutshi, V. (Contributor), Zwalinski, L. (Contributor) & Collaboration, T. A. (Creator), HEPData, 2014 DOI: 10.17182/hepdata.58283.v1, https://www.hepdata.net/record/ins894867%3Fversion=1 A search for the $Z\gamma$ decay mode of the Higgs boson in $pp$ collisions at $\sqrt{s}$ = 13 TeV with the ATLAS detector Aad, G. (Contributor), Abbott, B. (Contributor), Abbott, D. C. (Contributor), Abed Abud, A. A. 
(Contributor), Grinstein, S. (Contributor), Grivaz, J. (Contributor), Groh, S. (Contributor), Gross, E. (Contributor), Grosse-Knetter, J. (Contributor), Grout, Z. J. (Contributor), Grud, C. (Contributor), Grummer, A. (Contributor), Grundy, J. C. (Contributor), Guan, L. (Contributor), Guan, W. (Contributor), Gubbels, C. (Contributor), Guenther, J. (Contributor), Guerguichon, A. (Contributor), Guerrero Rojas, R. J. G. R. (Contributor), Guescini, F. (Contributor), Guest, D. (Contributor), Gugel, R. (Contributor), Guida, A. (Contributor), Guillemin, T. (Contributor), Guindon, S. (Contributor), Gul, U. (Contributor), Guo, J. (Contributor), Guo, W. (Contributor), Guo, Y. (Contributor), Guo, Z. (Contributor), Gupta, R. (Contributor), Gurbuz, S. (Contributor), Gustavino, G. (Contributor), Guth, M. (Contributor), Gutierrez, P. (Contributor), Gutschow, C. (Contributor), Guyot, C. (Contributor), Gwenlan, C. (Contributor), Gwilliam, C. B. (Contributor), Haaland, E. S. (Contributor), Haas, A. (Contributor), Haber, C. (Contributor), Hadavand, H. K. (Contributor), Hadef, A. (Contributor), Haleem, M. (Contributor), Haley, J. (Contributor), Hall, J. J. (Contributor), Halladjian, G. (Contributor), Hallewell, G. D. (Contributor), Hamano, K. (Contributor), Hamdaoui, H. (Contributor), Hamer, M. (Contributor), Hamity, G. N. (Contributor), Han, K. (Contributor), Han, L. (Contributor), Han, S. (Contributor), Han, Y. F. (Contributor), Hanagaki, K. (Contributor), Hance, M. (Contributor), Handl, D. M. (Contributor), Hank, M. D. (Contributor), Hankache, R. (Contributor), Hansen, E. (Contributor), Hansen, J. B. (Contributor), Hansen, J. D. (Contributor), Hansen, M. C. (Contributor), Hansen, P. H. (Contributor), Hanson, E. C. (Contributor), Hara, K. (Contributor), Harenberg, T. (Contributor), Harkusha, S. (Contributor), Harrison, P. F. (Contributor), Hartman, N. M. (Contributor), Hartmann, N. M. (Contributor), Hasegawa, Y. (Contributor), Hasib, A. (Contributor), Hassani, S. 
(Contributor), Haug, S. (Contributor), Hauser, R. (Contributor), Havener, L. B. (Contributor), Havranek, M. (Contributor), Hawkes, C. M. (Contributor), Hawkings, R. J. (Contributor), Hayashida, S. (Contributor), Hayden, D. (Contributor), Hayes, C. (Contributor), Hayes, R. L. (Contributor), Hays, C. P. (Contributor), Hays, J. M. (Contributor), Hayward, H. S. (Contributor), Haywood, S. J. (Contributor), He, F. (Contributor), He, Y. (Contributor), Heath, M. P. (Contributor), Hedberg, V. (Contributor), Heer, S. (Contributor), Heggelund, A. L. (Contributor), Heidegger, C. (Contributor), Heidegger, K. K. (Contributor), Heidorn, W. D. (Contributor), Heilman, J. (Contributor), Heim, S. (Contributor), Heim, T. (Contributor), Heinemann, B. (Contributor), Heinlein, J. G. (Contributor), Heinrich, J. J. (Contributor), Heinrich, L. (Contributor), Hejbal, J. (Contributor), Helary, L. (Contributor), Held, A. (Contributor), Hellesund, S. (Contributor), Helling, C. M. (Contributor), Hellman, S. (Contributor), Helsens, C. (Contributor), Henderson, R. C. W. (Contributor), Heng, Y. (Contributor), Henkelmann, L. (Contributor), Henriques Correia, C. A. M. (Contributor), Herde, H. (Contributor), Hernández Jiménez, J. Y. (Contributor), Herr, H. (Contributor), Herrmann, M. G. (Contributor), Herrmann, T. (Contributor), Herten, G. (Contributor), Hertenberger, R. (Contributor), Hervas, L. (Contributor), Herwig, T. C. (Contributor), Hesketh, G. G. (Contributor), Hessey, N. P. (Contributor), Hibi, H. (Contributor), Higashida, A. (Contributor), Higashino, S. (Contributor), Higón-Rodriguez, E. (Contributor), Hildebrand, K. (Contributor), Hill, J. C. (Contributor), Hill, K. K. (Contributor), Hiller, K. H. (Contributor), Hillier, S. J. (Contributor), Hils, M. (Contributor), Hinchliffe, I. (Contributor), Hinterkeuser, F. (Contributor), Hirose, M. (Contributor), Hirose, S. (Contributor), Hirschbuehl, D. (Contributor), Hiti, B. (Contributor), Hladik, O. (Contributor), Hlaluku, D. R. 
(Contributor), Hobbs, J. (Contributor), Hod, N. (Contributor), Hodgkinson, M. C. (Contributor), Hoecker, A. (Contributor), Hohn, D. (Contributor), Hohov, D. (Contributor), Holm, T. (Contributor), Holmes, T. R. (Contributor), Holzbock, M. (Contributor), Hommels, L. B. A. H. (Contributor), Hong, T. M. (Contributor), Honig, J. C. (Contributor), Hönle, A. (Contributor), Hooberman, B. H. (Contributor), Hopkins, W. H. (Contributor), Horii, Y. (Contributor), Horn, P. (Contributor), Horyn, L. A. (Contributor), Hou, S. (Contributor), Hoummada, A. (Contributor), Howarth, J. (Contributor), Hoya, J. (Contributor), Hrabovsky, M. (Contributor), Hrdinka, J. (Contributor), Hrivnac, J. (Contributor), Hrynevich, A. (Contributor), Hryn'ova, T. (Contributor), Hsu, P. J. (Contributor), Hsu, S. (Contributor), Hu, Q. (Contributor), Hu, S. (Contributor), Hu, Y. F. (Contributor), Huang, D. P. (Contributor), Huang, Y. (Contributor), Hubacek, Z. (Contributor), Hubaut, F. (Contributor), Huebner, M. (Contributor), Huegging, F. (Contributor), Huffman, T. B. (Contributor), Huhtinen, M. (Contributor), Hulsken, R. (Contributor), Hunter, R. F. H. (Contributor), Huo, P. (Contributor), Huseynov, N. (Contributor), Huston, J. (Contributor), Huth, J. (Contributor), Hyneman, R. (Contributor), Hyrych, S. (Contributor), Iacobucci, G. (Contributor), Iakovidis, G. (Contributor), Ibragimov, I. (Contributor), Iconomidou-Fayard, L. (Contributor), Iengo, P. (Contributor), Ignazzi, R. (Contributor), Igonkina, O. (Contributor), Iguchi, R. (Contributor), Iizawa, T. (Contributor), Ikegami, Y. (Contributor), Ikeno, M. (Contributor), Ilic, N. (Contributor), Iltzsche, F. (Contributor), Imam, H. (Contributor), Introzzi, G. (Contributor), Iodice, M. (Contributor), Iordanidou, K. (Contributor), Ippolito, V. (Contributor), Isacson, M. F. (Contributor), Ishino, M. (Contributor), Islam, W. (Contributor), Issever, C. (Contributor), Istin, S. (Contributor), Ito, F. (Contributor), Iturbe Ponce, P. J. M. (Contributor), Iuppa, R. 
(Contributor), Ivina, A. (Contributor), Iwasaki, H. (Contributor), Izen, J. M. (Contributor), Izzo, V. (Contributor), Jacka, P. (Contributor), Jackson, P. (Contributor), Jacobs, R. M. (Contributor), Jaeger, B. P. (Contributor), Jain, V. (Contributor), Jäkel, G. (Contributor), Jakobi, K. B. (Contributor), Jakobs, K. (Contributor), Jakoubek, T. (Contributor), Jamieson, J. (Contributor), Janas, K. W. (Contributor), Jansky, R. (Contributor), Janus, M. (Contributor), Janus, P. A. (Contributor), Jarlskog, G. (Contributor), Jaspan, A. E. (Contributor), Javadov, N. (Contributor), Javůrek, T. (Contributor), Javurkova, M. (Contributor), Jeanneau, F. (Contributor), Jeanty, L. (Contributor), Jejelava, J. (Contributor), Jenni, P. (Contributor), Jeong, N. (Contributor), Jézéquel, S. (Contributor), Ji, H. (Contributor), Jia, J. (Contributor), Jiang, H. (Contributor), Jiang, Y. (Contributor), Jiang, Z. (Contributor), Jiggins, S. (Contributor), Jimenez Morales, M. F. A. (Contributor), Jimenez Pena, P. J. (Contributor), Jin, S. (Contributor), Jinaru, A. (Contributor), Jinnouchi, O. (Contributor), Jivan, H. (Contributor), Johansson, P. (Contributor), Johns, K. A. (Contributor), Johnson, C. A. (Contributor), Jones, R. W. L. (Contributor), Jones, S. D. (Contributor), Jones, T. J. (Contributor), Jongmanns, J. (Contributor), Jovicevic, J. (Contributor), Ju, X. (Contributor), Junggeburth, J. J. (Contributor), Juste Rozas, R. A. (Contributor), Kaczmarska, A. (Contributor), Kado, M. (Contributor), Kagan, H. (Contributor), Kagan, M. (Contributor), Kahn, A. (Contributor), Kahra, C. (Contributor), Kaji, T. (Contributor), Kajomovitz, E. (Contributor), Kalderon, C. W. (Contributor), Kaluza, A. (Contributor), Kamenshchikov, A. (Contributor), Kaneda, M. (Contributor), Kang, N. J. (Contributor), Kang, S. (Contributor), Kano, Y. (Contributor), Kanzaki, J. (Contributor), Kaplan, L. S. (Contributor), Kar, D. (Contributor), Karava, K. (Contributor), Kareem, M. J. (Contributor), Karkanias, I. 
(Contributor), Karpov, S. N. (Contributor), Karpova, Z. M. (Contributor), Kartvelishvili, V. (Contributor), Karyukhin, A. N. (Contributor), Kasimi, E. (Contributor), Kastanas, A. (Contributor), Kato, C. (Contributor), Katzy, J. (Contributor), Kawade, K. (Contributor), Kawagoe, K. (Contributor), Kawaguchi, T. (Contributor), Kawamoto, T. (Contributor), Kawamura, G. (Contributor), Kay, E. F. (Contributor), Kazakos, S. (Contributor), Kazanin, V. F. (Contributor), Keeler, R. (Contributor), Kehoe, R. (Contributor), Keller, J. S. (Contributor), Kellermann, E. (Contributor), Kelsey, D. (Contributor), Kempster, J. J. (Contributor), Kendrick, J. (Contributor), Kennedy, K. E. (Contributor), Kepka, O. (Contributor), Kersten, S. (Contributor), Kerševan, B. P. (Contributor), Ketabchi Haghighat, H. S. (Contributor), Khader, M. (Contributor), Khalil-Zada, F. (Contributor), Khandoga, M. (Contributor), Khanov, A. (Contributor), Kharlamov, A. G. (Contributor), Kharlamova, T. (Contributor), Khoda, E. E. (Contributor), Khodinov, A. (Contributor), Khoo, T. J. (Contributor), Khoriauli, G. (Contributor), Khramov, E. (Contributor), Khubua, J. (Contributor), Kido, S. (Contributor), Kiehn, M. (Contributor), Kilby, C. R. (Contributor), Kim, E. (Contributor), Kim, Y. K. (Contributor), Kimura, N. (Contributor), Kirchhoff, A. (Contributor), Kirchmeier, D. (Contributor), Kirk, J. (Contributor), Kiryunin, A. E. (Contributor), Kishimoto, T. (Contributor), Kisliuk, D. P. (Contributor), Kitali, V. (Contributor), Kitsaki, C. (Contributor), Kivernyk, O. (Contributor), Klapdor-Kleingrothaus, T. (Contributor), Klassen, M. (Contributor), Klein, C. (Contributor), Klein, M. H. (Contributor), Klein, M. (Contributor), Klein, U. (Contributor), Kleinknecht, K. (Contributor), Klimek, P. (Contributor), Klimentov, A. (Contributor), Klingl, T. (Contributor), Klioutchnikova, T. (Contributor), Klitzner, F. F. (Contributor), Kluit, P. (Contributor), Kluth, S. (Contributor), Kneringer, E. (Contributor), Knoops, E. B. 
F. G. (Contributor), Knue, A. (Contributor), Kobayashi, D. (Contributor), Kobel, M. (Contributor), Kocian, M. (Contributor), Kodama, T. (Contributor), Kodys, P. (Contributor), Koeck, D. M. (Contributor), Koenig, P. T. (Contributor), Koffas, T. (Contributor), Köhler, N. M. (Contributor), Kolb, M. (Contributor), Koletsou, I. (Contributor), Komarek, T. (Contributor), Kondo, T. (Contributor), Köneke, K. (Contributor), Kong, A. X. Y. (Contributor), König, A. C. (Contributor), Kono, T. (Contributor), Konstantinides, V. (Contributor), Konstantinidis, N. (Contributor), Konya, B. (Contributor), Kopeliansky, R. (Contributor), Koperny, S. (Contributor), Korcyl, K. (Contributor), Kordas, K. (Contributor), Koren, G. (Contributor), Korn, A. (Contributor), Korolkov, I. (Contributor), Korolkova, E. V. (Contributor), Korotkova, N. (Contributor), Kortner, O. (Contributor), Kortner, S. (Contributor), Kostyukhin, V. V. (Contributor), Kotsokechagia, A. (Contributor), Kotwal, A. (Contributor), Koulouris, A. (Contributor), Kourkoumeli-Charalampidi, A. (Contributor), Kourkoumelis, C. (Contributor), Kourlitis, E. (Contributor), Kouskoura, V. (Contributor), Kowalewski, R. (Contributor), Kozanecki, W. (Contributor), Kozhin, A. S. (Contributor), Kramarenko, V. A. (Contributor), Kramberger, G. (Contributor), Krasnopevtsev, D. (Contributor), Krasny, M. W. (Contributor), Krasznahorkay, A. (Contributor), Krauss, D. (Contributor), Kremer, J. A. (Contributor), Kretzschmar, J. (Contributor), Krieger, P. (Contributor), Krieter, F. (Contributor), Krishnan, A. (Contributor), Krivos, M. (Contributor), Krizka, K. (Contributor), Kroeninger, K. (Contributor), Kroha, H. (Contributor), Kroll, J. (Contributor), Krowpman, K. S. (Contributor), Kruchonak, U. (Contributor), Krüger, H. (Contributor), Krumnack, N. (Contributor), Kruse, M. C. (Contributor), Krzysiak, J. A. (Contributor), Kubota, A. (Contributor), Kuchinskaia, O. (Contributor), Kuday, S. (Contributor), Kuechler, J. T. (Contributor), Kuehn, S. 
(Contributor), Kuhl, T. (Contributor), Kukhtin, V. (Contributor), Kulchitsky, Y. (Contributor), Kuleshov, S. (Contributor), Kulinich, Y. P. (Contributor), Kuna, M. (Contributor), Kunigo, T. (Contributor), Kupco, A. (Contributor), Kupfer, T. (Contributor), Kuprash, O. (Contributor), Kurashige, H. (Contributor), Kurchaninov, L. L. (Contributor), Kurochkin, Y. A. (Contributor), Kurova, A. (Contributor), Kurth, M. G. (Contributor), Kuwertz, E. S. (Contributor), Kuze, M. (Contributor), Kvam, A. K. (Contributor), Kvita, J. (Contributor), Kwan, T. (Contributor), La Ruffa, R. F. (Contributor), Lacasta, C. (Contributor), Lacava, F. (Contributor), Lack, D. P. J. (Contributor), Lacker, H. (Contributor), Lacour, D. (Contributor), Ladygin, E. (Contributor), Lafaye, R. (Contributor), Laforge, B. (Contributor), Lagouri, T. (Contributor), Lai, S. (Contributor), Lakomiec, I. K. (Contributor), Lambert, J. E. (Contributor), Lammers, S. (Contributor), Lampl, W. (Contributor), Lampoudis, C. (Contributor), Lançon, E. (Contributor), Landgraf, U. (Contributor), Landon, M. P. J. (Contributor), Lanfermann, M. C. (Contributor), Lang, V. S. (Contributor), Lange, J. C. (Contributor), Langenberg, R. J. (Contributor), Lankford, A. J. (Contributor), Lanni, F. (Contributor), Lantzsch, K. (Contributor), Lanza, A. (Contributor), Lapertosa, A. (Contributor), Laporte, J. F. (Contributor), Lari, T. (Contributor), Lasagni Manghi, M. F. (Contributor), Lassnig, M. (Contributor), Lau, T. S. (Contributor), Laudrain, A. (Contributor), Laurier, A. (Contributor), Lavorgna, M. (Contributor), Lawlor, S. D. (Contributor), Lazzaroni, M. (Contributor), Le, B. (Contributor), Le Guirriec, G. E. (Contributor), Lebedev, A. (Contributor), LeBlanc, M. (Contributor), LeCompte, T. (Contributor), Ledroit-Guillon, F. (Contributor), Lee, A. C. A. (Contributor), Lee, C. A. (Contributor), Lee, G. R. (Contributor), Lee, L. (Contributor), Lee, S. C. (Contributor), Lee, S. (Contributor), Lefebvre, B. (Contributor), Lefebvre, H. P. 
(Contributor), Lefebvre, M. (Contributor), Leggett, C. (Contributor), Lehmann, K. (Contributor), Lehmann, N. (Contributor), Lehmann Miotto, M. G. (Contributor), Leight, W. A. (Contributor), Leisos, A. (Contributor), Leite, M. A. L. (Contributor), Leitgeb, C. E. (Contributor), Leitner, R. (Contributor), Lellouch, D. (Contributor), Leney, K. J. C. (Contributor), Lenz, T. (Contributor), Leone, S. (Contributor), Leonidopoulos, C. (Contributor), Leopold, A. (Contributor), Leroy, C. (Contributor), Les, R. (Contributor), Lester, C. G. (Contributor), Levchenko, M. (Contributor), Levêque, J. (Contributor), Levin, D. (Contributor), Levinson, L. J. (Contributor), Lewis, D. J. (Contributor), Li, B. (Contributor), Li, C. (Contributor), Li, F. (Contributor), Li, H. (Contributor), Li, J. (Contributor), Li, K. (Contributor), Li, L. (Contributor), Li, M. (Contributor), Li, Q. (Contributor), Li, Q. Y. (Contributor), Li, S. (Contributor), Li, X. (Contributor), Li, Y. (Contributor), Li, Z. (Contributor), Liang, Z. (Contributor), Liberatore, M. (Contributor), Liberti, B. (Contributor), Liblong, A. (Contributor), Lie, K. (Contributor), Lim, S. (Contributor), Lin, C. Y. (Contributor), Lin, K. (Contributor), Linck, R. A. (Contributor), Lindley, R. E. (Contributor), Lindon, J. H. (Contributor), Linss, A. (Contributor), Lionti, A. L. (Contributor), Lipeles, E. (Contributor), Lipniacka, A. (Contributor), Liss, T. M. (Contributor), Lister, A. (Contributor), Little, J. D. (Contributor), Liu, B. (Contributor), Liu, B. L. (Contributor), Liu, H. B. (Contributor), Liu, J. B. (Contributor), Liu, J. K. K. (Contributor), Liu, K. (Contributor), Liu, M. (Contributor), Liu, P. (Contributor), Liu, X. (Contributor), Liu, Y. (Contributor), Liu, Y. L. (Contributor), Liu, Y. W. (Contributor), Livan, M. (Contributor), Lleres, A. (Contributor), Llorente Merino, M. J. (Contributor), Lloyd, S. L. (Contributor), Lo, C. Y. (Contributor), Lobodzinska, E. M. (Contributor), Loch, P. (Contributor), Loffredo, S. 
(Contributor), Lohse, T. (Contributor), Lohwasser, K. (Contributor), Lokajicek, M. (Contributor), Long, J. D. (Contributor), Long, R. E. (Contributor), Longarini, I. (Contributor), Longo, L. (Contributor), Looper, K. A. (Contributor), Lopez Paz, P. I. (Contributor), Lopez Solis, S. A. (Contributor), Lorenz, J. (Contributor), Lorenzo Martinez, M. N. (Contributor), Lory, A. M. (Contributor), Lösel, P. J. (Contributor), Lösle, A. (Contributor), Lou, X. (Contributor), Lounis, A. (Contributor), Love, J. (Contributor), Love, P. A. (Contributor), Lozano Bahilo, B. J. J. (Contributor), Lu, M. (Contributor), Lu, Y. J. (Contributor), Lubatti, H. J. (Contributor), Luci, C. (Contributor), Lucio Alves, A. F. L. (Contributor), Lucotte, A. (Contributor), Luehring, F. (Contributor), Luise, I. (Contributor), Luminari, L. (Contributor), Lund-Jensen, B. (Contributor), Lutz, M. S. (Contributor), Lynn, D. (Contributor), Lyons, H. (Contributor), Lysak, R. (Contributor), Lytken, E. (Contributor), Lyu, F. (Contributor), Lyubushkin, V. (Contributor), Lyubushkina, T. (Contributor), Ma, H. (Contributor), Ma, L. L. (Contributor), Ma, Y. (Contributor), Mac Donell, D. D. M. (Contributor), Maccarrone, G. (Contributor), Macchiolo, A. (Contributor), Macdonald, C. M. (Contributor), MacDonald, J. C. (Contributor), Machado Miguens, M. J. (Contributor), Madaffari, D. (Contributor), Madar, R. (Contributor), Mader, W. F. (Contributor), Madugoda Ralalage Don, R. D. M. (Contributor), Madysa, N. (Contributor), Maeda, J. (Contributor), Maeno, T. (Contributor), Maerker, M. (Contributor), Magerl, V. (Contributor), Magini, N. (Contributor), Magro, J. (Contributor), Mahon, D. J. (Contributor), Maidantchik, C. (Contributor), Maier, T. (Contributor), Maio, A. (Contributor), Maj, K. (Contributor), Majersky, O. (Contributor), Majewski, S. (Contributor), Makida, Y. (Contributor), Makovec, N. (Contributor), Malaescu, B. (Contributor), Malecki, P. (Contributor), Maleev, V. P. (Contributor), Malek, F. 
(Contributor), Malito, D. (Contributor), Mallik, U. (Contributor), Malon, D. (Contributor), Malone, C. (Contributor), Maltezos, S. (Contributor), Malyukov, S. (Contributor), Mamuzic, J. (Contributor), Mancini, G. (Contributor), Mandić, I. (Contributor), Manhaes de Andrade Filho, D. A. F. L. (Contributor), Maniatis, I. M. (Contributor), Manjarres Ramos, R. J. (Contributor), Mankinen, K. H. (Contributor), Mann, A. (Contributor), Manousos, A. (Contributor), Mansoulie, B. (Contributor), Manthos, I. (Contributor), Manzoni, S. (Contributor), Marantis, A. (Contributor), Marceca, G. (Contributor), Marchese, L. (Contributor), Marchiori, G. (Contributor), Marcisovsky, M. (Contributor), Marcoccia, L. (Contributor), Marcon, C. (Contributor), Marin Tobon, T. C. A. (Contributor), Marjanovic, M. (Contributor), Marshall, Z. (Contributor), Martensson, M. U. F. (Contributor), Marti-Garcia, S. (Contributor), Martin, C. B. (Contributor), Martin, T. A. (Contributor), Martin, V. J. (Contributor), Martin dit Latour, D. L. B. (Contributor), Martinelli, L. (Contributor), Martinez, M. (Contributor), Martinez Agullo, A. P. (Contributor), Martinez Outschoorn, O. V. I. (Contributor), Martin-Haugh, S. (Contributor), Martoiu, V. S. (Contributor), Martyniuk, A. C. (Contributor), Marzin, A. (Contributor), Maschek, S. R. (Contributor), Masetti, L. (Contributor), Mashimo, T. (Contributor), Mashinistov, R. (Contributor), Masik, J. (Contributor), Maslennikov, A. L. (Contributor), Massa, L. (Contributor), Massarotti, P. (Contributor), Mastrandrea, P. (Contributor), Mastroberardino, A. (Contributor), Masubuchi, T. (Contributor), Matakias, D. (Contributor), Matic, A. (Contributor), Matsuzawa, N. (Contributor), Mättig, P. (Contributor), Maurer, J. (Contributor), Maček, B. (Contributor), Maximov, D. A. (Contributor), Mazini, R. (Contributor), Maznas, I. (Contributor), Mazza, S. M. (Contributor), Mc Gowan, G. J. P. (Contributor), Mc Kee, K. S. P. (Contributor), McCarthy, T. G. (Contributor), McCormack, W. 
P. (Contributor), McDonald, E. F. (Contributor), Mcfayden, J. A. (Contributor), Mchedlidze, G. (Contributor), McKay, M. A. (Contributor), McLean, K. D. (Contributor), McMahon, S. J. (Contributor), McNamara, P. C. (Contributor), McNicol, C. J. (Contributor), McPherson, R. A. (Contributor), Mdhluli, J. E. (Contributor), Meadows, Z. A. (Contributor), Meehan, S. (Contributor), Megy, T. (Contributor), Mehlhase, S. (Contributor), Mehta, A. (Contributor), Meirose, B. (Contributor), Melini, D. (Contributor), Mellado Garcia, G. B. R. (Contributor), Mellenthin, J. D. (Contributor), Melo, M. (Contributor), Meloni, F. (Contributor), Melzer, A. (Contributor), Mendes Gouveia, G. E. D. (Contributor), Meng, L. (Contributor), Meng, X. T. (Contributor), Menke, S. (Contributor), Meoni, E. (Contributor), Mergelmeyer, S. (Contributor), Merkt, S. A. M. (Contributor), Merlassino, C. (Contributor), Mermod, P. (Contributor), Merola, L. (Contributor), Meroni, C. (Contributor), Merz, G. (Contributor), Meshkov, O. (Contributor), Meshreki, J. K. R. (Contributor), Metcalfe, J. (Contributor), Mete, A. S. (Contributor), Meyer, C. (Contributor), Meyer, J. (Contributor), Michetti, M. (Contributor), Middleton, R. P. (Contributor), Mijović, L. (Contributor), Mikenberg, G. (Contributor), Mikestikova, M. (Contributor), Mikuž, M. (Contributor), Mildner, H. (Contributor), Milic, A. (Contributor), Milke, C. D. (Contributor), Miller, D. W. (Contributor), Milov, A. (Contributor), Milstead, D. A. (Contributor), Mina, R. A. (Contributor), Minaenko, A. A. (Contributor), Minashvili, I. A. (Contributor), Mincer, A. I. (Contributor), Mindur, B. (Contributor), Mineev, M. (Contributor), Minegishi, Y. (Contributor), Mir, L. M. (Contributor), Mironova, M. (Contributor), Mirto, A. (Contributor), Mistry, K. P. (Contributor), Mitani, T. (Contributor), Mitrevski, J. (Contributor), Mitsou, V. A. (Contributor), Mittal, M. (Contributor), Miu, O. (Contributor), Miucci, A. (Contributor), Miyagawa, P. S. 
(Contributor), Mizukami, A. (Contributor), Mjörnmark, J. U. (Contributor), Mkrtchyan, T. (Contributor), Mlynarikova, M. (Contributor), Moa, T. (Contributor), Mobius, S. (Contributor), Mochizuki, K. (Contributor), Mogg, P. (Contributor), Mohapatra, S. (Contributor), Moles-Valls, R. (Contributor), Mönig, K. (Contributor), Monnier, E. (Contributor), Montalbano, A. (Contributor), Montejo Berlingen, B. J. (Contributor), Montella, M. (Contributor), Monticelli, F. (Contributor), Monzani, S. (Contributor), Morange, N. (Contributor), Moreira De Carvalho, D. C. A. L. (Contributor), Moreno, D. (Contributor), Moreno Llácer, L. M. (Contributor), Moreno Martinez, M. C. (Contributor), Morettini, P. (Contributor), Morgenstern, M. (Contributor), Morgenstern, S. (Contributor), Mori, D. (Contributor), Morii, M. (Contributor), Morinaga, M. (Contributor), Morisbak, V. (Contributor), Morley, A. K. (Contributor), Mornacchi, G. (Contributor), Morris, A. P. (Contributor), Morvaj, L. (Contributor), Moschovakos, P. (Contributor), Moser, B. (Contributor), Mosidze, M. (Contributor), Moskalets, T. (Contributor), Moss, J. (Contributor), Moyse, E. J. W. (Contributor), Muanza, S. (Contributor), Mueller, J. (Contributor), Mueller, R. S. P. (Contributor), Muenstermann, D. (Contributor), Mullier, G. A. (Contributor), Mungo, D. P. (Contributor), Munoz Martinez, M. J. L. (Contributor), Munoz Sanchez, S. F. J. (Contributor), Murin, P. (Contributor), Murray, W. J. (Contributor), Murrone, A. (Contributor), Muse, J. M. (Contributor), Muškinja, M. (Contributor), Mwewa, C. (Contributor), Myagkov, A. G. (Contributor), Myers, A. A. (Contributor), Myers, G. (Contributor), Myers, J. (Contributor), Myska, M. (Contributor), Nachman, B. P. (Contributor), Nackenhorst, O. (Contributor), Nag Nag, N. A. (Contributor), Nagai, K. (Contributor), Nagano, K. (Contributor), Nagasaka, Y. (Contributor), Nagle, J. L. (Contributor), Nagy, E. (Contributor), Nairz, A. M. (Contributor), Nakahama, Y. (Contributor), Nakamura, K. 
(Contributor), Nakamura, T. (Contributor), Nanjo, H. (Contributor), Napolitano, F. (Contributor), Naranjo Garcia, G. R. F. (Contributor), Narayan, R. (Contributor), Naryshkin, I. (Contributor), Naumann, T. (Contributor), Navarro, G. (Contributor), Nechaeva, P. Y. (Contributor), Nechansky, F. (Contributor), Neep, T. J. (Contributor), Negri, A. (Contributor), Negrini, M. (Contributor), Nellist, C. (Contributor), Nelson, C. (Contributor), Nelson, M. E. (Contributor), Nemecek, S. (Contributor), Nessi, M. (Contributor), Neubauer, M. S. (Contributor), Neuhaus, F. (Contributor), Neumann, M. (Contributor), Newhouse, R. (Contributor), Newman, P. R. (Contributor), Ng, C. W. (Contributor), Ng, Y. S. (Contributor), Ng, Y. W. Y. (Contributor), Ngair, B. (Contributor), Nguyen, H. D. N. (Contributor), Nguyen Manh, M. T. (Contributor), Nibigira, E. (Contributor), Nickerson, R. B. (Contributor), Nicolaidou, R. (Contributor), Nielsen, D. S. (Contributor), Nielsen, J. (Contributor), Niemeyer, M. (Contributor), Nikiforou, N. (Contributor), Nikolaenko, V. (Contributor), Nikolic-Audit, I. (Contributor), Nikolopoulos, K. (Contributor), Nilsson, P. (Contributor), Nindhito, H. R. (Contributor), Ninomiya, Y. (Contributor), Nisati, A. (Contributor), Nishu, N. (Contributor), Nisius, R. (Contributor), Nitsche, I. (Contributor), Nitta, T. (Contributor), Nobe, T. (Contributor), Noel, D. L. (Contributor), Noguchi, Y. (Contributor), Nomidis, I. (Contributor), Nomura, M. A. (Contributor), Nordberg, M. (Contributor), Novak, J. (Contributor), Novak, T. (Contributor), Novgorodova, O. (Contributor), Novotny, R. (Contributor), Nozka, L. (Contributor), Ntekas, K. (Contributor), Nurse, E. (Contributor), Oakham, F. G. (Contributor), Oberlack, H. (Contributor), Ocariz, J. (Contributor), Ochi, A. (Contributor), Ochoa, I. (Contributor), Ochoa-Ricoux, J. P. (Contributor), O'Connor, K. (Contributor), Oda, S. (Contributor), Odaka, S. (Contributor), Oerdek, S. (Contributor), Ogrodnik, A. (Contributor), Oh, A. 
Aad, G. et al. (ATLAS Collaboration) & Collaboration, T. A. (Creator), HEPData, 2020
Centrality and rapidity dependence of inclusive jet production in $\sqrt{s_\mathrm{NN}} = 5.02$ TeV proton--lead collisions with the ATLAS detector
Aad, G. et al. (ATLAS Collaboration), HEPData
(Contributor), Chapman, J. D. (Contributor), Charfeddine, D. (Contributor), Charlton, D. G. (Contributor), Chau, C. C. (Contributor), Chavez Barajas, B. C. A. (Contributor), Cheatham, S. (Contributor), Chegwidden, A. (Contributor), Chekanov, S. (Contributor), Chekulaev, S. V. (Contributor), Chelkov, G. A. (Contributor), Chelstowska, M. A. (Contributor), Chen, C. (Contributor), Chen, H. (Contributor), Chen, K. (Contributor), Chen, L. (Contributor), Chen, S. (Contributor), Chen, X. (Contributor), Chen, Y. (Contributor), Chen, Y. (Contributor), Cheng, H. C. (Contributor), Cheng, Y. (Contributor), Cheplakov, A. (Contributor), Cherkaoui El Moursli, E. M. R. (Contributor), Chernyatin, V. (Contributor), Cheu, E. C. (Contributor), Chevalier, L. (Contributor), Chiarella, V. (Contributor), Chiefari, G. (Contributor), Childers, J. T. (Contributor), Chilingarov, A. (Contributor), Chiodini, G. (Contributor), Chisholm, A. S. (Contributor), Chislett, R. T. (Contributor), Chitan, A. (Contributor), Chizhov, M. V. (Contributor), Chouridou, S. (Contributor), Chow, B. K. B. (Contributor), Chromek-Burckhart, D. (Contributor), Chu, M. L. (Contributor), Chudoba, J. (Contributor), Chwastowski, J. J. (Contributor), Chytka, L. (Contributor), Ciapetti, G. (Contributor), Ciftci, A. K. (Contributor), Ciftci, R. (Contributor), Cinca, D. (Contributor), Cindro, V. (Contributor), Ciocio, A. (Contributor), Cirkovic, P. (Contributor), Citron, Z. H. (Contributor), Citterio, M. (Contributor), Ciubancan, M. (Contributor), Clark, A. (Contributor), Clark, P. J. (Contributor), Clarke, R. N. (Contributor), Cleland, W. (Contributor), Clemens, J. C. (Contributor), Clement, C. (Contributor), Coadou, Y. (Contributor), Cobal, M. (Contributor), Coccaro, A. (Contributor), Cochran, J. (Contributor), Coffey, L. (Contributor), Cogan, J. G. (Contributor), Coggeshall, J. (Contributor), Cole, B. (Contributor), Cole, S. (Contributor), Colijn, A. P. (Contributor), Collot, J. (Contributor), Colombo, T. 
(Contributor), Colon, G. (Contributor), Compostella, G. (Contributor), Conde Muiño, M. P. (Contributor), Coniavitis, E. (Contributor), Conidi, M. C. (Contributor), Connell, S. H. (Contributor), Connelly, I. A. (Contributor), Consonni, S. M. (Contributor), Consorti, V. (Contributor), Constantinescu, S. (Contributor), Conta, C. (Contributor), Conti, G. (Contributor), Conventi, F. (Contributor), Cooke, M. (Contributor), Cooper, B. D. (Contributor), Cooper-Sarkar, A. M. (Contributor), Cooper-Smith, N. J. (Contributor), Copic, K. (Contributor), Cornelissen, T. (Contributor), Corradi, M. (Contributor), Corriveau, F. (Contributor), Corso-Radu, A. (Contributor), Cortes-Gonzalez, A. (Contributor), Cortiana, G. (Contributor), Costa, G. (Contributor), Costa, M. J. (Contributor), Costanzo, D. (Contributor), Côté, D. (Contributor), Cottin, G. (Contributor), Cowan, G. (Contributor), Cox, B. E. (Contributor), Cranmer, K. (Contributor), Cree, G. (Contributor), Crépé-Renaudin, S. (Contributor), Crescioli, F. (Contributor), Cribbs, W. A. (Contributor), Crispin Ortuzar, O. M. (Contributor), Cristinziani, M. (Contributor), Croft, V. (Contributor), Crosetti, G. (Contributor), Cuciuc, C. (Contributor), Cuhadar Donszelmann, D. T. (Contributor), Cummings, J. (Contributor), Curatolo, M. (Contributor), Cuthbert, C. (Contributor), Czirr, H. (Contributor), Czodrowski, P. (Contributor), Czyczula, Z. (Contributor), D'Auria, S. (Contributor), D'Onofrio, M. (Contributor), Da Cunha Sargedas De Sousa, C. S. D. S. M. J. (Contributor), Da Via, V. C. (Contributor), Dabrowski, W. (Contributor), Dafinca, A. (Contributor), Dai, T. (Contributor), Dale, O. (Contributor), Dallaire, F. (Contributor), Dallapiccola, C. (Contributor), Dam, M. (Contributor), Daniells, A. C. (Contributor), Dano Hoffmann, H. M. (Contributor), Dao, V. (Contributor), Darbo, G. (Contributor), Darmora, S. (Contributor), Dassoulas, J. (Contributor), Dattagupta, A. (Contributor), Davey, W. (Contributor), David, C. 
(Contributor), Davidek, T. (Contributor), Davies, E. (Contributor), Davies, M. (Contributor), Davignon, O. (Contributor), Davison, A. R. (Contributor), Davison, P. (Contributor), Davygora, Y. (Contributor), Dawe, E. (Contributor), Dawson, I. (Contributor), Daya-Ishmukhametova, R. K. (Contributor), De, K. (Contributor), de Asmundis, A. R. (Contributor), De Castro, C. S. (Contributor), De Cecco, C. S. (Contributor), De Groot, G. N. (Contributor), de Jong, J. P. (Contributor), De la Torre, L. T. H. (Contributor), De Lorenzi, L. F. (Contributor), De Nooij, N. L. (Contributor), De Pedis, P. D. (Contributor), De Salvo, S. A. (Contributor), De Sanctis, S. U. (Contributor), De Santo, S. A. (Contributor), De Vivie De Regie, V. D. R. J. B. (Contributor), Dearnaley, W. J. (Contributor), Debbe, R. (Contributor), Debenedetti, C. (Contributor), Dechenaux, B. (Contributor), Dedovich, D. V. (Contributor), Deigaard, I. (Contributor), Del Peso, P. J. (Contributor), Del Prete, P. T. (Contributor), Deliot, F. (Contributor), Delitzsch, C. M. (Contributor), Deliyergiyev, M. (Contributor), Dell'Acqua, A. (Contributor), Dell'Asta, L. (Contributor), Dell'Orso, M. (Contributor), Della Pietra, P. M. (Contributor), della Volpe, V. D. (Contributor), Delmastro, M. (Contributor), Delsart, P. A. (Contributor), Deluca, C. (Contributor), Demers, S. (Contributor), Demichev, M. (Contributor), Demilly, A. (Contributor), Denisov, S. P. (Contributor), Derendarz, D. (Contributor), Derkaoui, J. E. (Contributor), Derue, F. (Contributor), Dervan, P. (Contributor), Desch, K. (Contributor), Deterre, C. (Contributor), Deviveiros, P. O. (Contributor), Dewhurst, A. (Contributor), Dhaliwal, S. (Contributor), Di Ciaccio, C. A. (Contributor), Di Ciaccio, C. L. (Contributor), Di Domenico, D. A. (Contributor), Di Donato, D. C. (Contributor), Di Girolamo, G. A. (Contributor), Di Girolamo, G. B. (Contributor), Di Mattia, M. A. (Contributor), Di Micco, M. B. (Contributor), Di Nardo, N. R. (Contributor), Di Simone, S. A. 
(Contributor), Di Sipio, S. R. (Contributor), Di Valentino, V. D. (Contributor), Dias, F. A. (Contributor), Diaz, M. A. (Contributor), Diehl, E. B. (Contributor), Dietrich, J. (Contributor), Dietzsch, T. A. (Contributor), Diglio, S. (Contributor), Dimitrievska, A. (Contributor), Dingfelder, J. (Contributor), Dionisi, C. (Contributor), Dita, P. (Contributor), Dita, S. (Contributor), Dittus, F. (Contributor), Djama, F. (Contributor), Djobava, T. (Contributor), Djuvsland, J. I. (Contributor), do Vale, V. M. A. B. (Contributor), Do Valle Wemans, V. W. A. (Contributor), Doan, T. K. O. (Contributor), Dobos, D. (Contributor), Doglioni, C. (Contributor), Doherty, T. (Contributor), Dohmae, T. (Contributor), Dolejsi, J. (Contributor), Dolezal, Z. (Contributor), Dolgoshein, B. A. (Contributor), Donadelli, M. (Contributor), Donati, S. (Contributor), Dondero, P. (Contributor), Donini, J. (Contributor), Dopke, J. (Contributor), Doria, A. (Contributor), Dova, M. T. (Contributor), Doyle, A. T. (Contributor), Dris, M. (Contributor), Dubbert, J. (Contributor), Dube, S. (Contributor), Dubreuil, E. (Contributor), Duchovni, E. (Contributor), Duckeck, G. (Contributor), Ducu, O. A. (Contributor), Duda, D. (Contributor), Dudarev, A. (Contributor), Dudziak, F. (Contributor), Duflot, L. (Contributor), Duguid, L. (Contributor), Dührssen, M. (Contributor), Dunford, M. (Contributor), Duran Yildiz, Y. H. (Contributor), Düren, M. (Contributor), Durglishvili, A. (Contributor), Dwuznik, M. (Contributor), Dyndal, M. (Contributor), Ebke, J. (Contributor), Edson, W. (Contributor), Edwards, N. C. (Contributor), Ehrenfeld, W. (Contributor), Eifert, T. (Contributor), Eigen, G. (Contributor), Einsweiler, K. (Contributor), Ekelof, T. (Contributor), El Kacimi, K. M. (Contributor), Ellert, M. (Contributor), Elles, S. (Contributor), Ellinghaus, F. (Contributor), Ellis, N. (Contributor), Elmsheuser, J. (Contributor), Elsing, M. (Contributor), Emeliyanov, D. (Contributor), Enari, Y. (Contributor), Endner, O. 
C. (Contributor), Endo, M. (Contributor), Engelmann, R. (Contributor), Erdmann, J. (Contributor), Ereditato, A. (Contributor), Eriksson, D. (Contributor), Ernis, G. (Contributor), Ernst, J. (Contributor), Ernst, M. (Contributor), Ernwein, J. (Contributor), Errede, D. (Contributor), Errede, S. (Contributor), Ertel, E. (Contributor), Escalier, M. (Contributor), Esch, H. (Contributor), Escobar, C. (Contributor), Esposito, B. (Contributor), Etienvre, A. I. (Contributor), Etzion, E. (Contributor), Evans, H. (Contributor), Ezhilov, A. (Contributor), Fabbri, L. (Contributor), Facini, G. (Contributor), Fakhrutdinov, R. M. (Contributor), Falciano, S. (Contributor), Falla, R. J. (Contributor), Faltova, J. (Contributor), Fang, Y. (Contributor), Fanti, M. (Contributor), Farbin, A. (Contributor), Farilla, A. (Contributor), Farooque, T. (Contributor), Farrell, S. (Contributor), Farrington, S. M. (Contributor), Farthouat, P. (Contributor), Fassi, F. (Contributor), Fassnacht, P. (Contributor), Fassouliotis, D. (Contributor), Favareto, A. (Contributor), Fayard, L. (Contributor), Federic, P. (Contributor), Fedin, O. L. (Contributor), Fedorko, W. (Contributor), Fehling-Kaschek, M. (Contributor), Feigl, S. (Contributor), Feligioni, L. (Contributor), Feng, C. (Contributor), Feng, E. J. (Contributor), Feng, H. (Contributor), Fenyuk, A. B. (Contributor), Fernandez Perez, P. S. (Contributor), Ferrag, S. (Contributor), Ferrando, J. (Contributor), Ferrari, A. (Contributor), Ferrari, P. (Contributor), Ferrari, R. (Contributor), Ferreira de Lima, D. L. D. E. (Contributor), Ferrer, A. (Contributor), Ferrere, D. (Contributor), Ferretti, C. (Contributor), Ferretto Parodi, P. A. (Contributor), Fiascaris, M. (Contributor), Fiedler, F. (Contributor), Filipčič, A. (Contributor), Filipuzzi, M. (Contributor), Filthaut, F. (Contributor), Fincke-Keeler, M. (Contributor), Finelli, K. D. (Contributor), Fiolhais, M. C. N. (Contributor), Fiorini, L. (Contributor), Firan, A. (Contributor), Fischer, A. 
(Contributor), Fischer, J. (Contributor), Fisher, W. C. (Contributor), Fitzgerald, E. A. (Contributor), Flechl, M. (Contributor), Fleck, I. (Contributor), Fleischmann, P. (Contributor), Fleischmann, S. (Contributor), Fletcher, G. T. (Contributor), Fletcher, G. (Contributor), Flick, T. (Contributor), Floderus, A. (Contributor), Flores Castillo, C. L. R. (Contributor), Florez Bustos, B. A. C. (Contributor), Flowerdew, M. J. (Contributor), Formica, A. (Contributor), Forti, A. (Contributor), Fortin, D. (Contributor), Fournier, D. (Contributor), Fox, H. (Contributor), Fracchia, S. (Contributor), Francavilla, P. (Contributor), Franchini, M. (Contributor), Franchino, S. (Contributor), Francis, D. (Contributor), Franconi, L. (Contributor), Franklin, M. (Contributor), Franz, S. (Contributor), Fraternali, M. (Contributor), French, S. T. (Contributor), Friedrich, C. (Contributor), Friedrich, F. (Contributor), Froidevaux, D. (Contributor), Frost, J. A. (Contributor), Fukunaga, C. (Contributor), Fullana Torregrosa, T. E. (Contributor), Fulsom, B. G. (Contributor), Fuster, J. (Contributor), Gabaldon, C. (Contributor), Gabizon, O. (Contributor), Gabrielli, A. (Contributor), Gabrielli, A. (Contributor), Gadatsch, S. (Contributor), Gadomski, S. (Contributor), Gagliardi, G. (Contributor), Gagnon, P. (Contributor), Galea, C. (Contributor), Galhardo, B. (Contributor), Gallas, E. J. (Contributor), Gallo, V. (Contributor), Gallop, B. J. (Contributor), Gallus, P. (Contributor), Galster, G. (Contributor), Gan, K. K. (Contributor), Gandrajula, R. P. (Contributor), Gao, J. (Contributor), Gao, Y. S. (Contributor), Garay Walls, W. F. M. (Contributor), Garberson, F. (Contributor), García, C. (Contributor), García Navarro, N. J. E. (Contributor), Garcia-Sciveres, M. (Contributor), Gardner, R. W. (Contributor), Garelli, N. (Contributor), Garonne, V. (Contributor), Gatti, C. (Contributor), Gaudio, G. (Contributor), Gaur, B. (Contributor), Gauthier, L. (Contributor), Gauzzi, P. 
(Contributor), Gavrilenko, I. L. (Contributor), Gay, C. (Contributor), Gaycken, G. (Contributor), Gazis, E. N. (Contributor), Ge, P. (Contributor), Gecse, Z. (Contributor), Gee, C. N. P. (Contributor), Geerts, D. A. A. (Contributor), Geich-Gimbel, C. (Contributor), Gellerstedt, K. (Contributor), Gemme, C. (Contributor), Gemmell, A. (Contributor), Genest, M. H. (Contributor), Gentile, S. (Contributor), George, M. (Contributor), George, S. (Contributor), Gerbaudo, D. (Contributor), Gershon, A. (Contributor), Ghazlane, H. (Contributor), Ghodbane, N. (Contributor), Giacobbe, B. (Contributor), Giagu, S. (Contributor), Giangiobbe, V. (Contributor), Giannetti, P. (Contributor), Gianotti, F. (Contributor), Gibbard, B. (Contributor), Gibson, S. M. (Contributor), Gilchriese, M. (Contributor), Gillam, T. P. S. (Contributor), Gillberg, D. (Contributor), Gilles, G. (Contributor), Gingrich, D. M. (Contributor), Giokaris, N. (Contributor), Giordani, M. P. (Contributor), Giordano, R. (Contributor), Giorgi, F. M. (Contributor), Giorgi, F. M. (Contributor), Giraud, P. F. (Contributor), Giugni, D. (Contributor), Giuliani, C. (Contributor), Giulini, M. (Contributor), Gjelsten, B. K. (Contributor), Gkaitatzis, S. (Contributor), Gkialas, I. (Contributor), Gladilin, L. K. (Contributor), Glasman, C. (Contributor), Glatzer, J. (Contributor), Glaysher, P. C. F. (Contributor), Glazov, A. (Contributor), Glonti, G. L. (Contributor), Goblirsch-Kolb, M. (Contributor), Goddard, J. R. (Contributor), Godfrey, J. (Contributor), Godlewski, J. (Contributor), Goeringer, C. (Contributor), Goldfarb, S. (Contributor), Golling, T. (Contributor), Golubkov, D. (Contributor), Gomes, A. (Contributor), Gomez Fajardo, F. L. S. (Contributor), Gonçalo, R. (Contributor), Goncalves Pinto Firmino Da Costa, P. F. D. C. J. (Contributor), Gonella, L. (Contributor), González de la Hoz, D. L. H. S. (Contributor), Gonzalez Parra, P. G. (Contributor), Gonzalez-Sevilla, S. (Contributor), Goossens, L. 
(Contributor), Gorbounov, P. A. (Contributor), Gordon, H. A. (Contributor), Gorelov, I. (Contributor), Gorini, B. (Contributor), Gorini, E. (Contributor), Gorišek, A. (Contributor), Gornicki, E. (Contributor), Goshaw, A. T. (Contributor), Gössling, C. (Contributor), Gostkin, M. I. (Contributor), Gouighri, M. (Contributor), Goujdami, D. (Contributor), Goulette, M. P. (Contributor), Goussiou, A. G. (Contributor), Goy, C. (Contributor), Gozpinar, S. (Contributor), Grabas, H. M. X. (Contributor), Graber, L. (Contributor), Grabowska-Bold, I. (Contributor), Grafström, P. (Contributor), Grahn, K. (Contributor), Gramling, J. (Contributor), Gramstad, E. (Contributor), Grancagnolo, S. (Contributor), Grassi, V. (Contributor), Gratchev, V. (Contributor), Gray, H. M. (Contributor), Graziani, E. (Contributor), Grebenyuk, O. G. (Contributor), Greenwood, Z. D. (Contributor), Gregersen, K. (Contributor), Gregor, I. M. (Contributor), Grenier, P. (Contributor), Griffiths, J. (Contributor), Grillo, A. A. (Contributor), Grimm, K. (Contributor), Grinstein, S. (Contributor), Gris, P. (Contributor), Grishkevich, Y. V. (Contributor), Grivaz, J. (Contributor), Grohs, J. P. (Contributor), Grohsjean, A. (Contributor), Gross, E. (Contributor), Grosse-Knetter, J. (Contributor), Grossi, G. C. (Contributor), Groth-Jensen, J. (Contributor), Grout, Z. J. (Contributor), Guan, L. (Contributor), Guenther, J. (Contributor), Guescini, F. (Contributor), Guest, D. (Contributor), Gueta, O. (Contributor), Guicheney, C. (Contributor), Guido, E. (Contributor), Guillemin, T. (Contributor), Guindon, S. (Contributor), Gul, U. (Contributor), Gumpert, C. (Contributor), Guo, J. (Contributor), Gupta, S. (Contributor), Gutierrez, P. (Contributor), Gutierrez Ortiz, O. N. G. (Contributor), Gutschow, C. (Contributor), Guttman, N. (Contributor), Guyot, C. (Contributor), Gwenlan, C. (Contributor), Gwilliam, C. B. (Contributor), Haas, A. (Contributor), Haber, C. (Contributor), Hadavand, H. K. (Contributor), Haddad, N. 
(Contributor), Haefner, P. (Contributor), Hageböck, S. (Contributor), Hajduk, Z. (Contributor), Hakobyan, H. (Contributor), Haleem, M. (Contributor), Hall, D. (Contributor), Halladjian, G. (Contributor), Hamacher, K. (Contributor), Hamal, P. (Contributor), Hamano, K. (Contributor), Hamer, M. (Contributor), Hamilton, A. (Contributor), Hamilton, S. (Contributor), Hamity, G. N. (Contributor), Hamnett, P. G. (Contributor), Han, L. (Contributor), Hanagaki, K. (Contributor), Hanawa, K. (Contributor), Hance, M. (Contributor), Hanke, P. (Contributor), Hanna, R. (Contributor), Hansen, J. B. (Contributor), Hansen, J. D. (Contributor), Hansen, P. H. (Contributor), Hara, K. (Contributor), Hard, A. S. (Contributor), Harenberg, T. (Contributor), Hariri, F. (Contributor), Harkusha, S. (Contributor), Harper, D. (Contributor), Harrington, R. D. (Contributor), Harris, O. M. (Contributor), Harrison, P. F. (Contributor), Hartjes, F. (Contributor), Hasegawa, M. (Contributor), Hasegawa, S. (Contributor), Hasegawa, Y. (Contributor), Hasib, A. (Contributor), Hassani, S. (Contributor), Haug, S. (Contributor), Hauschild, M. (Contributor), Hauser, R. (Contributor), Havranek, M. (Contributor), Hawkes, C. M. (Contributor), Hawkings, R. J. (Contributor), Hawkins, A. D. (Contributor), Hayashi, T. (Contributor), Hayden, D. (Contributor), Hays, C. P. (Contributor), Hayward, H. S. (Contributor), Haywood, S. J. (Contributor), Head, S. J. (Contributor), Heck, T. (Contributor), Hedberg, V. (Contributor), Heelan, L. (Contributor), Heim, S. (Contributor), Heim, T. (Contributor), Heinemann, B. (Contributor), Heinrich, L. (Contributor), Hejbal, J. (Contributor), Helary, L. (Contributor), Heller, C. (Contributor), Heller, M. (Contributor), Hellman, S. (Contributor), Hellmich, D. (Contributor), Helsens, C. (Contributor), Henderson, J. (Contributor), Henderson, R. C. W. (Contributor), Heng, Y. (Contributor), Hengler, C. (Contributor), Henrichs, A. (Contributor), Henriques Correia, C. A. M. 
(Contributor), Henrot-Versille, S. (Contributor), Hensel, C. (Contributor), Herbert, G. H. (Contributor), Hernández Jiménez, J. Y. (Contributor), Herrberg-Schubert, R. (Contributor), Herten, G. (Contributor), Hertenberger, R. (Contributor), Hervas, L. (Contributor), Hesketh, G. G. (Contributor), Hessey, N. P. (Contributor), Hickling, R. (Contributor), Higón-Rodriguez, E. (Contributor), Hill, E. (Contributor), Hill, J. C. (Contributor), Hiller, K. H. (Contributor), Hillert, S. (Contributor), Hillier, S. J. (Contributor), Hinchliffe, I. (Contributor), Hines, E. (Contributor), Hirose, M. (Contributor), Hirschbuehl, D. (Contributor), Hobbs, J. (Contributor), Hod, N. (Contributor), Hodgkinson, M. C. (Contributor), Hodgson, P. (Contributor), Hoecker, A. (Contributor), Hoeferkamp, M. R. (Contributor), Hoenig, F. (Contributor), Hoffman, J. (Contributor), Hoffmann, D. (Contributor), Hohlfeld, M. (Contributor), Holmes, T. R. (Contributor), Hong, T. M. (Contributor), Hooft van Huysduynen, V. H. L. (Contributor), Hostachy, J. (Contributor), Hou, S. (Contributor), Hoummada, A. (Contributor), Howard, J. (Contributor), Howarth, J. (Contributor), Hrabovsky, M. (Contributor), Hristova, I. (Contributor), Hrivnac, J. (Contributor), Hryn'ova, T. (Contributor), Hsu, C. (Contributor), Hsu, P. J. (Contributor), Hsu, S. (Contributor), Hu, D. (Contributor), Hu, X. (Contributor), Huang, Y. (Contributor), Hubacek, Z. (Contributor), Hubaut, F. (Contributor), Huegging, F. (Contributor), Huffman, T. B. (Contributor), Hughes, E. W. (Contributor), Hughes, G. (Contributor), Huhtinen, M. (Contributor), Hülsing, T. A. (Contributor), Hurwitz, M. (Contributor), Huseynov, N. (Contributor), Huston, J. (Contributor), Huth, J. (Contributor), Iacobucci, G. (Contributor), Iakovidis, G. (Contributor), Ibragimov, I. (Contributor), Iconomidou-Fayard, L. (Contributor), Ideal, E. (Contributor), Iengo, P. (Contributor), Igonkina, O. (Contributor), Iizawa, T. (Contributor), Ikegami, Y. (Contributor), Ikematsu, K. 
(Contributor), Ikeno, M. (Contributor), Ilchenko, Y. (Contributor), Iliadis, D. (Contributor), Ilic, N. (Contributor), Inamaru, Y. (Contributor), Ince, T. (Contributor), Ioannou, P. (Contributor), Iodice, M. (Contributor), Iordanidou, K. (Contributor), Ippolito, V. (Contributor), Irles Quiles, Q. A. (Contributor), Isaksson, C. (Contributor), Ishino, M. (Contributor), Ishitsuka, M. (Contributor), Ishmukhametov, R. (Contributor), Issever, C. (Contributor), Istin, S. (Contributor), Iturbe Ponce, P. J. M. (Contributor), Iuppa, R. (Contributor), Ivarsson, J. (Contributor), Iwanski, W. (Contributor), Iwasaki, H. (Contributor), Izen, J. M. (Contributor), Izzo, V. (Contributor), Jackson, B. (Contributor), Jackson, M. (Contributor), Jackson, P. (Contributor), Jaekel, M. R. (Contributor), Jain, V. (Contributor), Jakobs, K. (Contributor), Jakobsen, S. (Contributor), Jakoubek, T. (Contributor), Jakubek, J. (Contributor), Jamin, D. O. (Contributor), Jana, D. K. (Contributor), Jansen, E. (Contributor), Jansen, H. (Contributor), Janssen, J. (Contributor), Janus, M. (Contributor), Jarlskog, G. (Contributor), Javadov, N. (Contributor), Javůrek, T. (Contributor), Jeanty, L. (Contributor), Jejelava, J. (Contributor), Jeng, G. (Contributor), Jennens, D. (Contributor), Jenni, P. (Contributor), Jentzsch, J. (Contributor), Jeske, C. (Contributor), Jézéquel, S. (Contributor), Ji, H. (Contributor), Jia, J. (Contributor), Jiang, Y. (Contributor), Jimenez Belenguer, B. M. (Contributor), Jin, S. (Contributor), Jinaru, A. (Contributor), Jinnouchi, O. (Contributor), Joergensen, M. D. (Contributor), Johansson, K. E. (Contributor), Johansson, P. (Contributor), Johns, K. A. (Contributor), Jon-And, K. (Contributor), Jones, G. (Contributor), Jones, R. W. L. (Contributor), Jones, T. J. (Contributor), Jongmanns, J. (Contributor), Jorge, P. M. (Contributor), Joshi, K. D. (Contributor), Jovicevic, J. (Contributor), Ju, X. (Contributor), Jung, C. A. (Contributor), Jungst, R. M. (Contributor), Jussel, P. 
(Contributor), Juste Rozas, R. A. (Contributor), Kaci, M. (Contributor), Kaczmarska, A. (Contributor), Kado, M. (Contributor), Kagan, H. (Contributor), Kagan, M. (Contributor), Kajomovitz, E. (Contributor), Kalderon, C. W. (Contributor), Kama, S. (Contributor), Kamenshchikov, A. (Contributor), Kanaya, N. (Contributor), Kaneda, M. (Contributor), Kaneti, S. (Contributor), Kantserov, V. A. (Contributor), Kanzaki, J. (Contributor), Kaplan, B. (Contributor), Kapliy, A. (Contributor), Kar, D. (Contributor), Karakostas, K. (Contributor), Karastathis, N. (Contributor), Karnevskiy, M. (Contributor), Karpov, S. N. (Contributor), Karpova, Z. M. (Contributor), Karthik, K. (Contributor), Kartvelishvili, V. (Contributor), Karyukhin, A. N. (Contributor), Kashif, L. (Contributor), Kasieczka, G. (Contributor), Kass, R. D. (Contributor), Kastanas, A. (Contributor), Kataoka, Y. (Contributor), Katre, A. (Contributor), Katzy, J. (Contributor), Kaushik, V. (Contributor), Kawagoe, K. (Contributor), Kawamoto, T. (Contributor), Kawamura, G. (Contributor), Kazama, S. (Contributor), Kazanin, V. F. (Contributor), Kazarinov, M. Y. (Contributor), Keeler, R. (Contributor), Kehoe, R. (Contributor), Keil, M. (Contributor), Keller, J. S. (Contributor), Kempster, J. J. (Contributor), Keoshkerian, H. (Contributor), Kepka, O. (Contributor), Kerševan, B. P. (Contributor), Kersten, S. (Contributor), Kessoku, K. (Contributor), Keung, J. (Contributor), Khalil-zada, F. (Contributor), Khandanyan, H. (Contributor), Khanov, A. (Contributor), Khodinov, A. (Contributor), Khomich, A. (Contributor), Khoo, T. J. (Contributor), Khoriauli, G. (Contributor), Khoroshilov, A. (Contributor), Khovanskiy, V. (Contributor), Khramov, E. (Contributor), Khubua, J. (Contributor), Kim, H. Y. (Contributor), Kim, H. (Contributor), Kim, S. H. (Contributor), Kimura, N. (Contributor), Kind, O. (Contributor), King, B. T. (Contributor), King, M. (Contributor), King, R. S. B. (Contributor), King, S. B. (Contributor), Kirk, J. 
(Contributor), Kiryunin, A. E. (Contributor), Kishimoto, T. (Contributor), Kisielewska, D. (Contributor), Kiss, F. (Contributor), Kittelmann, T. (Contributor), Kiuchi, K. (Contributor), Kladiva, E. (Contributor), Klein, M. (Contributor), Klein, U. (Contributor), Kleinknecht, K. (Contributor), Klimek, P. (Contributor), Klimentov, A. (Contributor), Klingenberg, R. (Contributor), Klinger, J. A. (Contributor), Klioutchnikova, T. (Contributor), Klok, P. F. (Contributor), Kluge, E. (Contributor), Kluit, P. (Contributor), Kluth, S. (Contributor), Kneringer, E. (Contributor), Knoops, E. B. F. G. (Contributor), Knue, A. (Contributor), Kobayashi, D. (Contributor), Kobayashi, T. (Contributor), Kobel, M. (Contributor), Kocian, M. (Contributor), Kodys, P. (Contributor), Koevesarki, P. (Contributor), Koffas, T. (Contributor), Koffeman, E. (Contributor), Kogan, L. A. (Contributor), Kohlmann, S. (Contributor), Kohout, Z. (Contributor), Kohriki, T. (Contributor), Koi, T. (Contributor), Kolanoski, H. (Contributor), Koletsou, I. (Contributor), Koll, J. (Contributor), Komar, A. A. (Contributor), Komori, Y. (Contributor), Kondo, T. (Contributor), Kondrashova, N. (Contributor), Köneke, K. (Contributor), König, A. C. (Contributor), König, S. (Contributor), Kono, T. (Contributor), Konoplich, R. (Contributor), Konstantinidis, N. (Contributor), Kopeliansky, R. (Contributor), Koperny, S. (Contributor), Köpke, L. (Contributor), Kopp, A. K. (Contributor), Korcyl, K. (Contributor), Kordas, K. (Contributor), Korn, A. (Contributor), Korol, A. A. (Contributor), Korolkov, I. (Contributor), Korolkova, E. V. (Contributor), Korotkov, V. A. (Contributor), Kortner, O. (Contributor), Kortner, S. (Contributor), Kostyukhin, V. V. (Contributor), Kotov, V. M. (Contributor), Kotwal, A. (Contributor), Kourkoumelis, C. (Contributor), Kouskoura, V. (Contributor), Koutsman, A. (Contributor), Kowalewski, R. (Contributor), Kowalski, T. Z. (Contributor), Kozanecki, W. (Contributor), Kozhin, A. S. 
(Contributor), Yuan, L. (Contributor), Yurkewicz, A. (Contributor), Yusuff, I. (Contributor), Zabinski, B. (Contributor), Zaidan, R. (Contributor), Zaitsev, A. M. (Contributor), Zaman, A. (Contributor), Zambito, S. (Contributor), Zanello, L. (Contributor), Zanzi, D. (Contributor), Zeitnitz, C. (Contributor), Zeman, M. (Contributor), Zemla, A. (Contributor), Zengel, K. (Contributor), Zenin, O. (Contributor), Ženiš, T. (Contributor), Zerwas, D. (Contributor), Zevi della Porta, D. P. G. (Contributor), Zhang, D. (Contributor), Zhang, F. (Contributor), Zhang, H. (Contributor), Zhang, J. (Contributor), Zhang, L. (Contributor), Zhang, X. (Contributor), Zhang, Z. (Contributor), Zhao, Z. (Contributor), Zhemchugov, A. (Contributor), Zhong, J. (Contributor), Zhou, B. (Contributor), Zhou, L. (Contributor), Zhou, N. (Contributor), Zhu, C. G. (Contributor), Zhu, H. (Contributor), Zhu, J. (Contributor), Zhu, Y. (Contributor), Zhuang, X. (Contributor), Zhukov, K. (Contributor), Zibell, A. (Contributor), Zieminska, D. (Contributor), Zimine, N. I. (Contributor), Zimmermann, C. (Contributor), Zimmermann, R. (Contributor), Zimmermann, S. (Contributor), Zimmermann, S. (Contributor), Zinonos, Z. (Contributor), Ziolkowski, M. (Contributor), Zobernig, G. (Contributor), Zoccoli, A. (Contributor), zur Nedden, N. M. (Contributor), Zurzolo, G. (Contributor), Zutshi, V. (Contributor), Zwalinski, L. (Contributor) & Collaboration, A. (Creator), HEPData, 2015 DOI: 10.17182/hepdata.67349.v1, https://www.hepdata.net/record/ins1334140%3Fversion=1 View all 192 datasets
Taylor Series with Examples

In this lesson we will learn about Taylor series, with some examples of deriving the Taylor series of functions.

What is a Taylor series?

A Taylor series represents a function as an infinite sum of terms built from the function's derivatives at a single point. With the help of a Taylor series we can write a function as a sum involving its derivatives at that point. Suppose f(x) is infinitely differentiable at a point a. Then its Taylor series about a is: $ f(x)=\sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n !}(x-a)^{n} $

Taylor Series Examples

Taylor series of sin x

To find the Taylor series of sin x we first find the nth derivative of sin x, which is: $ f^{(n)}(x)=\sin \left(x+\frac{n \pi}{2}\right) $ Evaluating the function and its derivatives at x = 0 we obtain $ \begin{aligned} f(0) &=\sin 0=0 \\ f^{\prime}(0) &=\sin (\pi / 2)=1 \\ f^{\prime \prime}(0) &=\sin \pi=0 \\ f^{\prime \prime \prime}(0) &=\sin (3 \pi / 2)=-1 \end{aligned} $ Therefore the expansion of sin x at x = 0, also known as the Maclaurin expansion, is given by: $ \sin x=x-\frac{x^{3}}{3 !}+\frac{x^{5}}{5 !}-\cdots $

Taylor series of cos x at $ x=\pi / 3 $

As in the example above, the nth derivative of cos x is given by $ f^{(n)}(x)=\cos \left(x+\frac{n \pi}{2}\right) $ Evaluating the function and its derivatives at $ x=\pi / 3 $ we obtain $ \begin{aligned} f(\pi / 3) &=\cos (\pi / 3)=1 / 2 \\ f^{\prime}(\pi / 3) &=\cos (5 \pi / 6)=-\sqrt{3} / 2 \\ f^{\prime \prime}(\pi / 3) &=\cos (4 \pi / 3)=-1 / 2 \end{aligned} $ Thus the Taylor expansion of cos x at $ x=\pi / 3 $ is given by: $ \cos x=\frac{1}{2}-\frac{\sqrt{3}}{2}(x-\pi / 3)-\frac{1}{2} \frac{(x-\pi / 3)^{2}}{2 !}+\cdots $

Maclaurin series examples

The Maclaurin series is the special case of the Taylor series expanded at a = 0.
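The expansion of cos x about x = π/3 derived above can be checked numerically by summing the series using the closed form for the nth derivative. A short Python sketch (the function name is ours):

```python
import math

def taylor_cos(x, a, terms):
    """Partial sum of the Taylor series of cos about the point a.

    Uses the closed form f^(n)(x) = cos(x + n*pi/2) derived above.
    """
    total = 0.0
    for n in range(terms):
        deriv_at_a = math.cos(a + n * math.pi / 2)
        total += deriv_at_a / math.factorial(n) * (x - a) ** n
    return total

a = math.pi / 3
x = 1.0
approx = taylor_cos(x, a, terms=8)
print(abs(approx - math.cos(x)))  # tiny truncation error near the center
```

With only 8 terms the error at x = 1 is already far below machine precision, because |x - a| is small.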
Maclaurin series expansion of sin x: $ \sin x=x-\frac{x^{3}}{3 !}+\frac{x^{5}}{5 !}-\frac{x^{7}}{7 !}+\cdots $ for $ -\infty \lt x \lt \infty $

Maclaurin series expansion of cos x: $ \cos x=1-\frac{x^{2}}{2 !}+\frac{x^{4}}{4 !}-\frac{x^{6}}{6 !}+\cdots $ for $ -\infty \lt x \lt \infty $

Maclaurin series expansion of $ \tan^{-1} x $: $ \tan ^{-1} x=x-\frac{x^{3}}{3}+\frac{x^{5}}{5}-\frac{x^{7}}{7}+\cdots $ for $ -1 \lt x \lt 1 $

Maclaurin series expansion of $ e^{x} $: $ e^{x}=1+x+\frac{x^{2}}{2 !}+\frac{x^{3}}{3 !}+\frac{x^{4}}{4 !}+\cdots $ for $ -\infty \lt x \lt \infty $

Further Reading: Wikipedia, Taylor series
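The same partial-sum idea applies to every Maclaurin series listed above. Here is a sketch for e^x, whose series converges for all real x (function name is ours):

```python
import math

def maclaurin_exp(x, terms):
    """Partial sum 1 + x + x^2/2! + ... of the Maclaurin series of e^x."""
    return sum(x ** n / math.factorial(n) for n in range(terms))

for x in (-2.0, 0.5, 3.0):
    approx = maclaurin_exp(x, terms=20)
    print(x, abs(approx - math.exp(x)))  # shrinks as terms grow, for any x
```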
Incorporating human mobility data improves forecasts of Dengue fever in Thailand Mathew V. Kiang1 na1, Mauricio Santillana2,3 na1, Jarvis T. Chen4, Jukka-Pekka Onnela5, Nancy Krieger4, Kenth Engø-Monsen6, Nattwut Ekapirat7, Darin Areechokchai8, Preecha Prempree8, Richard J. Maude7,9,11 na2 & Caroline O. Buckee10,11 na2 Scientific Reports volume 11, Article number: 923 (2021) Cite this article Over 390 million people worldwide are infected with dengue fever each year. In the absence of an effective vaccine for general use, national control programs must rely on hospital readiness and targeted vector control to prepare for epidemics, so accurate forecasting remains an important goal. Many dengue forecasting approaches have used environmental data linked to mosquito ecology to predict when epidemics will occur, but these have had mixed results. Conversely, human mobility, an important driver in the spatial spread of infection, is often ignored. Here we compare time-series forecasts of dengue fever in Thailand, integrating epidemiological data with mobility models generated from mobile phone data. We show that geographically-distant provinces strongly connected by human travel have more highly correlated dengue incidence than weakly connected provinces of the same distance, and that incorporating mobility data improves traditional time-series forecasting approaches. Notably, no single model or class of model always outperformed others. We propose an adaptive, mosaic forecasting approach for early warning systems. More than half the world's population is at risk of infection from the dengue virus, which causes an estimated 390 million infections1 and 25,000 deaths per year2,3. The dengue pathogen is spread in urban and peri-urban areas by invasive mosquitoes belonging to the Aedes complex. As a result, dengue has emerged as a major threat in the context of a rapidly urbanizing, globally connected world3,4,5. 
For example, despite the general decline in the incidence of other communicable diseases, the incidence of dengue fever has doubled every 10 years since 19906. The rapid geographic expansion of the vector suggests there will be a continuing emergence of dengue globally3,4,5. Currently, there is no drug treatment for dengue7,8 and only a partially effective vaccine, which cannot be used in seronegative individuals9. Therefore, despite the mixed results of vector control efforts8, targeted and thorough vector control approaches, hospital readiness, and risk communication can improve public health preparedness for seasonal outbreaks. Fundamental to the success of these preparations is data on the burden of disease in different areas, and some sense of how an epidemic may progress in the near term and on local spatial scales relevant to national control programs. Forecasting the epidemic trajectory of dengue on weekly or monthly timescales remains a relatively new science for infectious diseases10,11,12,13,14,15,16,17,18,19,20,21,22,23. Unlike weather and climate forecasting, where physical laws dictate the dynamics of the system, the social and biological dynamics that drive infectious disease outbreaks make forecasting dengue epidemics challenging. Recurring epidemics, as opposed to novel pathogens emerging for the first time, occur against a backdrop of shifting population immunity, which is difficult to quantify. Complicating surveillance, pathogens like dengue are primarily reported based on symptoms rather than laboratory confirmation. Like influenza and malaria, dengue causes non-specific symptoms, fever in particular, so reporting reliability and time lags impact data quality24,25,26. Despite these complexities, routine forecasting is an important priority for national dengue control programs8,11. There has been a recent surge of interest and success in building forecasting models for seasonal epidemics of dengue fever10,11,12,13,14,15,16,17,18,19,20,21. 
A distinction can be made between mechanistic epidemiological models and statistical models. In mechanistic models, the mode of transmission (in this case, mosquito-borne and strong temperature dependence) is built into the model and drives the predicted infection dynamics. In contrast, statistical models rely on the identification of past epidemiological activity patterns and historical correlations with external data streams, often generated by human behavior on Internet search engines or social media, to monitor disease activity and predict future outbreaks. Mechanistic models aim at providing biological insight and a basis for interpretation, but for socially and environmentally complex infections like dengue, these models are often challenging to parameterize. Dengue is particularly challenging as it is composed of multiple immunologically distinct strains and relies on the interaction of mosquito and human population dynamics and microclimate variability. Metapopulation models have been developed to incorporate the spatial dynamics of dengue outbreaks, modeling each area with a set of location-specific parameters and linking the areas through estimated migration of individuals. Metapopulation models play an important role in our understanding of epidemic outbreaks across spatial regions27,28,29, synchronicity between regions30, oscillations of epidemics31, and strategies to reduce transmission32. Despite their importance in understanding dynamics, mechanistic models, and metapopulation models in particular, may lack sufficient data for appropriate parameterization, and are often not feasible in a forecasting context. As a result, statistical models have been more successful for outbreak preparedness for which the modeling goal is to provide quantitative, relatively short-term predictions with explicit uncertainty10,12,13,14,15,16,17,18,19,20,21,27,28,29.
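To make the metapopulation idea concrete, the following toy sketch couples two SIR patches through a travel fraction, in the spirit of the models cited above. All parameter values are hypothetical and chosen only for illustration; they are not taken from any of the referenced studies:

```python
def two_patch_sir(beta, gamma, travel, n_days, dt=0.1):
    """Euler-integrate two SIR patches coupled by human travel.

    `travel` is the fraction of time residents of one patch spend
    mixing in the other patch; infection is seeded in patch 0 only.
    """
    S = [0.999, 1.0]   # susceptible fractions
    I = [0.001, 0.0]   # patch 1 starts disease-free
    R = [0.0, 0.0]
    for _ in range(int(n_days / dt)):
        # force of infection blends local and visited-patch prevalence
        foi = [beta * ((1 - travel) * I[p] + travel * I[1 - p]) for p in (0, 1)]
        for p in (0, 1):
            new_inf = foi[p] * S[p] * dt
            rec = gamma * I[p] * dt
            S[p] -= new_inf
            I[p] += new_inf - rec
            R[p] += rec
    return S, I, R

S, I, R = two_patch_sir(beta=0.5, gamma=0.2, travel=0.05, n_days=365)
print(R)  # patch 1 experiences an outbreak only through the coupling
```

Even weak coupling eventually seeds the second patch, which is why spatial connectivity matters for synchrony between regions.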
Most statistical forecasting approaches for dengue have been based on autocorrelation in case data, often incorporating environmental information due to the importance of temperature and other factors to the availability of mosquitoes and variation of the incubation period of the virus in the vector. Many of these have focused on long-term predictions of dengue at the city level15,21,33,34, or larger regions within a specific country14,16,17,19. Models often show mixed success with high prediction accuracy in the immediate forecasting horizons (e.g., 1–2 months) and rapid decay at longer time horizons (e.g., 3–6 months). It is unclear if weather or climate variables substantially improve forecasting; at least one study that systematically looked at different model parameters for autoregressive models, with and without a wide range of climate variables, across states in Mexico found no conclusive improvement12. More recently, ensemble models have become a powerful way to combine different approaches in order to leverage the strengths of each while minimizing the weaknesses23,35. This approach has recently been applied to dengue13. Others have incorporated new sources of data from internet search terms to predict dengue nationally18, employed novel statistical methods to predict dengue in San Juan, Puerto Rico36, or combined common climate covariates with generalized additive models to predict annual incidence of dengue hemorrhagic fever10. Although dengue spreads to new human populations primarily via human mobility5, both across long distances37,38 and within local communities39, incorporating this aspect of the spatial connectivity between locations within forecasting frameworks has been challenging. Current forecasting models, both mechanistic and statistical, either ignore or make crude assumptions about how populations are connected by travel.
Parameterizing human mobility is challenging due to a paucity of relevant data streams, particularly in low-income settings. We have previously used mobile phone records to quantify national movements and showed that they provide improved prediction for dengue outbreaks in Pakistan5. Specifically, we used a gravity model to parametrize human mobility in a mechanistic framework because dengue was emerging into naïve populations, where statistical methods could not be used. Others have used daily commuting data to model mobility using a radiation model, which in turn is used to parameterize a mechanistic model40. Although considerable difficulty remains in accessing mobile phone records or other scalable data sources about mobility, it is clear that gravity models, radiation models, and other proxies for travel measures may perform poorly in many settings41. To date, almost all efforts to forecast dengue have either focused on optimizing a single modeling framework across regions, with parameters fit individually, or have analyzed multiple models for a particular location. Few statistical models used for forecasting dengue incorporate spatial dependencies and none incorporate information about mobility patterns. Here, we contribute to the existing literature by using 7 years of monthly dengue data (2010–2016) from Thailand, which has a developed dengue surveillance program, and mobility data from approximately 11 million mobile phone subscribers to show that geographically distant provinces that are more strongly connected by human mobility have more highly correlated dengue incidence than weakly connected provinces. We compare model structures incorporating time-series approaches or spatial dependencies, and mobility data, finding that this improves model prediction, but no individual approach provides the best performing model in all locations over all time horizons.
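A gravity model of the kind referenced above predicts trips between two locations from their populations and separation distance, $ T_{ij} = k P_i^{\alpha} P_j^{\beta} / d_{ij}^{\gamma} $. A minimal sketch with illustrative, unfitted coefficients (every number below is hypothetical, not from the study):

```python
def gravity_trips(pop_i, pop_j, dist_km, k=1e-4, alpha=1.0, beta=1.0, gamma=2.0):
    """Predicted daily trips between two provinces under a gravity model.

    T_ij = k * P_i^alpha * P_j^beta / d_ij^gamma. Coefficient values
    are illustrative placeholders, not fitted to any real CDR data.
    """
    return k * pop_i ** alpha * pop_j ** beta / dist_km ** gamma

# Hypothetical province pair: large hub vs. a distant smaller province.
predicted = gravity_trips(8_000_000, 400_000, 700.0)
observed = 1200.0  # hypothetical CDR-derived daily trip count
relative_error = 100 * (predicted - observed) / observed
print(predicted, relative_error)
```

Comparing `predicted` to an observed trip count gives the percent under- or over-prediction used to flag routes the gravity model handles poorly.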
We quantify the error for each province in Thailand, showing that provinces in the north of the country are more difficult to forecast with confidence than those in the south, regardless of model choice, and that different models' performances may be linked to demographic and social factors such as population density and gross provincial product per capita. We propose that mosaic forecasting approaches, which dynamically adapt over time and space, and end up using the best model for that location and time period, are likely to be the most effective for use in early warning systems in national control programs. No one-size-fits-all: forecasting performance varies in space and time We compared several forecasting approaches for the 77 Thai provinces to assess how model performance varied by region and over time, and to measure the impact of integrating the mobility data. Specifically, for each province, we fit four models: (1) local (non-spatially dependent) models commonly used for dengue; specifically, seasonal autoregressive integrated moving average models (Plain SARIMA) across a grid of parameters, (2) SARIMA models that use information from the top five most connected provinces (in terms of number of incoming trips) based on mobile phone data (CDR SARIMA), (3) SARIMA models that use information from the top five most connected provinces (in terms of predicted number of incoming trips) based on our gravity model estimates (Gravity SARIMA), and (4) a data-driven network approach, based on a regularized regression approach, that predicts dengue incidence in a given location potentially using dengue incidence from every other location as input (LASSO; see "Materials and methods" and reference49 for details). Figure 1 illustrates the results of all models at all forecasting horizons for Bangkok (see SI Appendix, Text S1 for online-only results for all other provinces). 
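The core of the CDR- and Gravity-augmented SARIMA models is a regression of a province's incidence on its own past plus lagged incidence from connected provinces. The stripped-down sketch below fits such an autoregression with one exogenous neighbor term by ordinary least squares on synthetic series; a real analysis would use a full seasonal SARIMA implementation, and all series and names here are ours:

```python
def ols(X, y):
    """Solve the least-squares normal equations (X'X) b = X'y
    by Gaussian elimination with partial pivoting."""
    k = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(k)]
         for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(k)]
    for i in range(k):                       # forward elimination
        piv = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    coef = [0.0] * k
    for i in range(k - 1, -1, -1):           # back substitution
        coef[i] = (b[i] - sum(A[i][c] * coef[c]
                              for c in range(i + 1, k))) / A[i][i]
    return coef

# Synthetic monthly incidence: province B leads province A by one month.
b_series = [10 + 5 * ((t % 12) in (5, 6, 7, 8)) for t in range(60)]
a_series = [0.0, 0.0] + [0.5 * b_series[t - 1] + 2.0 for t in range(2, 60)]

# Design matrix: intercept, own 1-month lag, neighbor's 1-month lag.
X = [[1.0, a_series[t - 1], b_series[t - 1]] for t in range(2, 60)]
y = [a_series[t] for t in range(2, 60)]
intercept, own_lag, neighbor_lag = ols(X, y)
print(neighbor_lag)  # the neighbor term should dominate, near 0.5
```

Because the synthetic series was built with a one-month lead from B to A, the fit recovers a large neighbor coefficient and a negligible own-lag coefficient, which is the signal the augmented SARIMAs exploit.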
At early forecasting horizons (i.e., 1-month and up to 3-months ahead time horizons), all models performed well, with the CDR SARIMA and Gravity SARIMA models outperforming the Plain SARIMA models by about 5–10% (Fig. 2) as captured by the mean absolute error. After the 3-month forecasting horizon, the Plain SARIMA model performance drops substantially faster than all other models. Importantly, the grouping of out-of-sample prediction errors, across forecasting horizons, tended to be closer in the LASSO models, indicating that across forecasting horizons, the network models lose predictive power more slowly than the SARIMA-based models. We present all plots for all provinces in an online repository (SI Appendix, Text S1). Mean absolute error (MAE) for all Bangkok models. The mean absolute error (y-axis) expressed as number of cases for each model (x-axis) and for each forecast. Models are grouped as SARIMA with no exogenous variables (Plain), SARIMAs with the top 5 most connected regions based on the predicted trips from a gravity model (Gravity SARIMA), and SARIMAs with the top 5 most connected regions based on CDR data (CDR SARIMA). The rightmost models show a data-driven network model, denoted as LASSO, since it is based on a least absolute shrinkage and selection operator prediction model, and mosaic model. Comparing the best models for Bangkok, by model type. Focusing only on the best performing model for each model type and each time horizon, we show the relative mean absolute error (left panel) and the mean absolute error (right panel). On the left, the baseline of comparison is the traditional AR(1) model and the y-axis can be interpreted as the improvement over this baseline—i.e., a value of .9 indicates a 10% improvement. We show that both the Plain SARIMA (red) and CDR SARIMA (green) models perform better than the LASSO model at earlier forecasting horizons but perform worse at later horizons. 
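The error metrics plotted above can be computed directly: the mean absolute error of each forecast, and its ratio to the AR(1) baseline's error. A sketch with hypothetical case counts (a relative value below 1 means the candidate improves on the baseline):

```python
def mae(pred, obs):
    """Mean absolute error between forecast and observed case counts."""
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(obs)

observed = [120, 340, 560, 410, 220, 150]
ar1_forecast = [100, 300, 500, 480, 300, 120]    # hypothetical baseline
model_forecast = [115, 330, 540, 430, 250, 140]  # hypothetical candidate

baseline_mae = mae(ar1_forecast, observed)
candidate_mae = mae(model_forecast, observed)
relative_mae = candidate_mae / baseline_mae
print(relative_mae)  # e.g. 0.9 would mean a 10% improvement over AR(1)
```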
Across other provinces, the observations of model performance for Bangkok are similar. Specifically, all models perform well in the near term, Gravity and CDR SARIMA models usually outperform Plain SARIMA models, and there is lower variation in prediction error when using the LASSO models. Importantly, no single model or class of model outperformed others across all provinces or all forecasting horizons (Fig. 3; SI Appendix, Fig. S5). We found that across all model types, provinces in the south of the country had lower prediction errors compared to those in the north of the country (Fig. 3). This difference in forecasting power was particularly pronounced at farther forecasting horizons. For example, when comparing the out-of-sample prediction errors of the CDR SARIMA to the Plain SARIMA, the CDR SARIMAs were worse in 8 tasks for forecasting horizons of 1–3 months and better in only 3 tasks with no statistically significant difference in the remaining 220 prediction tasks. However, for forecasting horizons of 4–6 months, the CDR SARIMA outperformed the Plain SARIMA in 40 tasks and only underperformed in 8 with no statistically significant difference in the remaining 183 tasks (SI Appendix, Fig. S6). Mean absolute error for the best model in each class at t + 1, t + 3, and t + 6 forecasting horizons for all provinces. The mean absolute error (y-axis) on the prediction (i.e., log) scale of the best model for each class for all provinces (x-axis). Provinces are ordered by latitude (x-axis, right is more northerly). There is a general decline in predictive power at farther forecasting horizons and at more northerly provinces; however, no single model or class of model performs best across all areas and all prediction horizons.
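Because no single model wins everywhere, one can instead select, for each province and horizon, whichever model had the lowest out-of-sample error over recent months. A minimal winner-takes-all sketch (model names and error values are illustrative only):

```python
def pick_winner(recent_errors):
    """Return the model with the lowest mean absolute error over the
    supplied recent out-of-sample months (winner-takes-all)."""
    return min(recent_errors,
               key=lambda m: sum(recent_errors[m]) / len(recent_errors[m]))

# Absolute errors over the previous 3 out-of-sample months, per model.
recent_errors = {
    "plain_sarima":   [40, 55, 60],
    "cdr_sarima":     [35, 30, 45],
    "gravity_sarima": [38, 33, 50],
    "lasso":          [60, 20, 25],
}
print(pick_winner(recent_errors))
```

Re-running the selection each month lets the underlying base model change over time as each model's recent accuracy shifts.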
We measured the characteristics of provinces in which different models performed better or worse and found that the Plain SARIMA models performed similarly when comparing top and bottom deciles of total number of dengue cases, median number of monthly dengue cases, median monthly rate of dengue, population density, and GPP per capita. In contrast, the LASSO and mobility-augmented SARIMA models performed better in places with higher total annual cases, higher population, and lower GPP per capita (see SI Appendix, Fig. S8–S13), suggesting systematic and generalizable differences in model performance that, with more validation and in combination with geographic variation in model performance, could be used to inform model choice. We show the feasibility of combining different classes of models by using a simple winner-takes-all voting system approach we named an adaptive mosaic model. This ensemble model selects the best performing model for each province and forecasting horizon based on the out-of-sample prediction error of the previous 3 months, which allows the underlying base model to change over time (Fig. 4). When comparing the out-of-sample prediction errors to an AR(1) model, the mosaic model outperforms the AR(1) in 107 tasks (i.e., province and forecasting horizon), underperforms the AR(1) in 3 tasks, and is not statistically significantly different from an AR(1) in the remaining 352 tasks (SI Appendix, Fig. S7). At the 6 month forecasting horizon, a difficult prediction task for any model, we note that no models were able to predict the incidence peak in 2015; however, the adaptive mosaic model compensated more quickly and did not overshoot its prediction relative to the AR(1) model (Fig. 4). Further exploration of location-specific and task-specific voting prediction systems is outside the scope of this study but should be explored in future research efforts. Mosaic model vs AR(1) for Bangkok at t + 1, t + 3, and t + 6 forecasting horizons.
We show the predictions for a simple mosaic model at t + 1, t + 3, and t + 6 forecasting horizons for Bangkok in blue. For comparison, we show predictions from an AR(1) in red and observed cases in grey bars. Under each bar, we indicate the base model selected by the mosaic ensemble using a winner-takes-all approach based on the previous three out-of-sample prediction months. The t + 6 forecasting horizon presented a significant challenge for all models, but the mosaic model adapted more quickly and did not over-predict relative to the AR(1). Gravity models under-predict long-distance travel to and from Bangkok To assess the role of inter-province migration, we analyzed the call data records (CDR) of approximately 11 million mobile phone subscribers between August 1, 2017 and October 19, 2017. At the time of data collection, the mobile phone operator had about 26% of the market share and was the third largest provider in Thailand. Since travel patterns remained stable over our period of observation (coefficient of variation: 1.3%; SI Appendix, Fig. S1), we calculated average daily journeys between all pairs of provinces in both directions, and compared observed mobility in the CDR data to expected mobility based on gravity models (see "Materials and methods") assuming travel over our time period is consistent with travel for the rest of the year (SI Appendix, Fig. S2). We found that the routes of travel that deviate significantly from gravity model-based predictions in both directions are focused on Bangkok (Fig. 5), with more travel than expected from long distances around the country such as Phuket and Bangkok itself (Fig. 5, left), and less travel than expected within and around the city (Fig. 5, right). These hot and cold spots, where higher or lower than expected travel was observed, were robust to the gravity model coefficients used (SI Appendix, Table S1). Under- and over-prediction of outlier travel.
Relative under-prediction (left) and over-prediction (right) comparing observed mobility data (from CDRs) to estimated mobility data from the best fit gravity model. We defined relative prediction error as 100%*(PredictedTrips − ObservedTrips)/ObservedTrips. We highlight only observations with Cook's distance greater than five times the average Cook's distance. Note that Bangkok (center of the map) is central to much of the over- and under-prediction outliers with most over-prediction near Bangkok. All plots were made using ggplot255 in R 4.0.156. At longer distances, strongly connected provinces show higher correlation in dengue incidence than weakly connected provinces In Thailand, dengue follows a seasonal cycle across all 77 provinces (Fig. 6), with variation in the timing of onset and epidemic peak in different locations over our period of observation42. We analyzed the correlation between clinical cases in each province with different time lags between all pairs of provinces. Figure 7 shows the relationship between the correlation in dengue cases between pairs of provinces, stratified with respect to geographic distance and mobility measured using mobile phone data. Consistent with previous studies43,44,45,46, the correlation of dengue incidence between provinces is strongest when they are close to each other and declines with distance and over time (i.e. the 3-month lagged correlation is weaker than the 1-month lagged correlation). For provinces less than 1,000 km apart, human mobility estimated using mobile phone data does not appear to impact the correlation of clinical cases. For longer distances, however, more highly connected locations show higher correlation in clinical dengue cases than locations the same distance apart but with low observed connectivity (Fig. 7). Note that some but not all of these long-distance connections are locations with international airports (SI Appendix, Fig. 
S3), and provinces connected by airports have higher correlation than those that are not connected by airports (SI Appendix, Fig. S4). Monthly dengue incidence by province. Monthly crude incidence of dengue (per 1000 person-years) by province (y-axis) ordered by centroid latitude (higher is more northern) over 7 years of observation (x-axis). Dengue in Thailand follows a seasonal cycle with geographic variation in both the timing of onset and peak of the epidemic. Correlation of province-level dengue by distance, at different time lags. We show the mean cross-correlation coefficient (y-axis) for pairs of provinces at binned distances (x-axis; 0 indicates correlation of an area with itself) for synchronous dengue (left panel) and lagged by 1 month (middle panel) and 3 months (right panel). The lines are separated based on the connectivity of pairs of provinces where the red line shows the bottom quartile of provinces in terms of incoming and outgoing travel and the blue line shows the top quartile. Bangkok, an important travel hub, is in the approximate center of Thailand and between 700 and 800 km from all other provinces, therefore the last two distance categories do not include Bangkok. Dengue forecasting remains an important public health challenge in Thailand and other endemic countries, especially at farther forecasting horizons. Given the complexity of dengue transmission, statistical forecasting approaches like those examined here have been shown to produce meaningful disease estimates in multiple locations and may therefore be suitable for immediate use by national control programs. In addition, we have shown that integrating additional data streams, such as information about human mobility, can improve forecasts in many areas, but the added benefit will be specific to the area and time horizon of interest. 
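The lagged pairwise correlations behind the analysis described above are standard Pearson cross-correlations between two provinces' incidence series. A sketch on synthetic seasonal series, where a hypothetical province B simply trails province A by one month:

```python
import math

def lagged_corr(x, y, lag):
    """Pearson correlation of x[t] with y[t + lag] (y lagging x by `lag`)."""
    xs = x[:len(x) - lag] if lag else x[:]
    ys = y[lag:]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy)

a = [math.sin(2 * math.pi * t / 12) for t in range(48)]  # seasonal incidence
b = [0.0] + a[:-1]                # province B trails A by one month
print(lagged_corr(a, b, lag=1))   # high: B at t+1 mirrors A at t
```

As in Fig. 7, the correlation at the matching lag exceeds the synchronous correlation when one series systematically leads the other.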
The interesting geographic variation in forecasting accuracy, which is not linked to population density or GPP per capita, may reflect the proximity to international borders with countries where frequent migration occurs. Overall, no single modeling approach can be expected to provide an optimal early warning system across all areas, even within a single country or region, or across all time horizons, so adaptive, mosaic forecasts are likely to provide the most effective approach. This type of approach could be easily integrated within the data platforms recently developed in Thailand11, which are flexible enough to accommodate different modeling approaches and forecasts. We show that simple network methods (that implicitly incorporate human mobility) can improve upon commonly-used local SARIMA models. Also, given that the network-based approach we studied relied only on dengue case count data routinely collected by most endemic countries, we envision that similar approaches may be easily extended, and may prove to be meaningful, in many other locations around the globe. The regularized multi-variate regression framework can also flexibly identify and incorporate additional province-level data, time lags, and other factors in the predictive model, and could serve as a hypothesis-generation tool that may capture temporal changes in inter-regional human mobility. We highlight the fact that even though the mobility data we used covered only a small fraction of time represented in the dengue case data (3.2%; i.e., 81 days vs 7 years), it was still able to improve the local (non-augmented) SARIMA, suggesting that even relatively coarse travel information would improve naïve SARIMA models. Although mobile phone data is challenging to obtain, the coarse granularity of mobility information that we used completely protects individual subscriber privacy while adding substantially to forecasting performance.
Since it is continuously collected, there is no reason these data could not be aggregated by mobile operators and provided on a relatively frequent basis to disease control programs. A limitation of using CDRs to model dengue transmission is that they reflect the movement patterns of the entire population, whereas dengue tends to occur more in children and young adults in urban areas42. As governments prioritize how and where to spend money to improve dengue surveillance, our study suggests that new regularized regression frameworks incorporating mobility data can improve forecasts substantially. Any forecasting model will depend on the quality of the case data it is trained upon, highlighting the primary importance of good epidemiological data. A limitation of this work is that most dengue cases in Thailand, as in most countries, are not confirmed with a diagnostic test, relying instead on syndromic surveillance. This can be unreliable: the case definition for dengue fever overlaps substantially with other causes of acute febrile illness, and the completeness of the data relies on individual healthcare workers completing the reporting forms. Thus, much of the money for better dengue forecasting should be focused on faster and better dengue case detection, more widespread diagnostic testing, sentinel surveillance of serotypes, a robust computational framework for sharing case data across regions to be analyzed centrally, and capacity building within control programs. In addition, we note that dengue in Thailand follows a cyclical, multi-year pattern of higher incidence44, which is not fully captured in our observation window of 7 years. In addition to better quality data, more historical data will be necessary for improving forecasting models that incorporate these longer period cycles.
We are limited to the use of call detail records over the course of 81 days in a single year and must assume that the relative mobility in this period is representative of the full 7 years of clinical case data. For legal, regulatory, and logistical reasons, obtaining longer histories of mobility patterns is often not feasible. For example, mobile operators do not store this information, and are often not allowed to store this information, for more than a few months. Previous research, using other sources of data for human mobility, suggests that holiday and seasonal fluctuations in mobility affect the relative routes of travel within countries but that within-country mobility is remarkably stable47. Incorporating seasonal and holiday fluctuations into model predictions is an important area for future research. Similarly, our mobility data are memoryless, and intermediate locations between two provinces are not recorded yet may play an important role in transmission dynamics. Additional mobility data, perhaps outside the regulatory constraints of mobile phone operators, are necessary to assess this possibility.

Dengue incidence data

We obtained monthly dengue case counts for over 7,000 subdistricts in Thailand from the Ministry of Public Health. These data are not available publicly and are used with the permission of the Ministry of Public Health. They consist of monthly dengue incidence counts from January 2010 through December 2016, by mutually-exclusive disease type (i.e., dengue fever, dengue shock syndrome, or dengue hemorrhagic fever). We aggregated these data to the province level and to overall dengue case counts. In our data, there was a national average of 91,000 dengue cases per year, with a range of 39,368 (2014) to 145,600 (2013) cases per year. To assess inter-province travel, we analyzed call detail records (CDRs) of approximately 11 million mobile phone subscribers between August 1, 2017 and October 19, 2017.
At the time of data collection, the mobile phone operator had about 26% of the market share and was the third largest provider in Thailand. In order to ensure the privacy of the mobile phone subscribers, and in compliance with national laws and the privacy policy of the Telenor group, special considerations were taken with the CDRs. First, only the mobile operator had access to the CDRs, and all data processing was performed on a server owned by, and only available to, the operator, thus ensuring that detailed records never left the operator or Thailand. Second, the operator provided researchers with a list of approximate cellular tower locations. For every tower location, we returned a corresponding, unlinked geographic identifier ("geocode") of the nearest subdistrict. Mobile operator employees then aggregated the detailed CDRs up to the researcher-provided geocodes. Further spatial and temporal aggregation was performed by the researchers. These data are not publicly available and are used with the permission of Telenor Research. To quantify travel, every subscriber was assigned a daily "home" location based on their most frequently used geocode. We tabulated daily travel between a subscriber's home location on one day relative to the day before. Trips were aggregated to geocode-to-geocode pairs for every day and thus are memoryless, preventing the ability to trace a user (or group of users) across more than two days or more than two areas. We normalized the number of trips from geocode i to geocode j by the number of subscribers at geocode i. We then multiplied this proportion by the estimated population at geocode i to get the flow from i to j. This assumes that the operator's subscribers are distributed across provinces roughly in proportion to population. While this assumption cannot be fully tested, there is a strong correlation (Pearson's r = 0.90) between subscribers and population for each province.
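The normalization step described above can be sketched in a few lines. The function and data names below are illustrative, not taken from the study's code:

```python
# Sketch of the trip-normalization step: for each origin geocode i, the
# observed trip count to j is scaled by the operator's penetration at i,
#   flow_ij = (trips_ij / subscribers_i) * population_i.
# Geocodes "A" and "B" and all counts are made-up illustrative values.

def estimate_flows(trips, subscribers, population):
    """trips: {(i, j): count}; subscribers, population: {i: count}."""
    flows = {}
    for (i, j), n_trips in trips.items():
        if subscribers.get(i, 0) > 0:
            flows[(i, j)] = n_trips / subscribers[i] * population[i]
    return flows

trips = {("A", "B"): 50, ("B", "A"): 20}
subscribers = {"A": 1000, "B": 400}
population = {"A": 10000, "B": 4000}
flows = estimate_flows(trips, subscribers, population)
# Flow estimates: A to B is about 500, B to A is about 200.
```

With a 10% subscriber share in both areas, the observed trips are scaled up tenfold, which is exactly the population-weighting described in the text.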
On average, 11.4 million subscribers (16.7% of the total population) recorded at least one event (i.e., phone call, text message, internet activity) per day (SI Appendix, Fig. S1). At both the national and provincial levels, the number of subscribers and the number of trips showed no significant deviations during this time period. For example, at the national level, the coefficient of variation for the daily number of subscribers was 1.3%. Therefore, we used the mean number of trips over this time period as our estimate of inter-province travel.

Population, gross provincial product per capita, and distance estimates

To estimate province-level population, we used the United Nations-adjusted 2015 population estimates from WorldPop48, which combines remote-sensing data with other data sources to create random-forest-generated population maps. Each file contains the estimated population per pixel and was overlaid with the official administrative shapefile. We then summed the value of all pixels within each province. We used publicly available 2015 gross provincial product per capita provided by the Office of the National Economic and Social Development Board of Thailand49. The concept of "distance" is flexible in the gravity model, and geodesic distance often ignores important geographical (e.g., mountain ranges) or social and behavioral constraints on human mobility. In addition to calculating geodesic distance between provinces, we calculated road distance and travel time based on OpenStreetMap data using the Open Source Routing Machine50.

Comparing observed and predicted travel

We compared observed travel between provinces from CDRs to travel estimated by a gravity model with three different measures of distance: geodesic distance, road distance, and travel time. The gravity model is a popular econometric model51, often used to estimate mobility between areas52.
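As a concrete sketch of the geodesic-distance measure, the haversine formula gives the great-circle distance between two centroids. It assumes a spherical Earth of radius 6371 km, and the coordinates used below are rough, illustrative centroids for Bangkok and Chiang Mai, not the study's values:

```python
import math

# Minimal haversine sketch for geodesic distance between two points,
# assuming a spherical Earth (R = 6371 km).
def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Rough centroids: Bangkok (13.75 N, 100.50 E), Chiang Mai (18.79 N, 98.98 E).
d = haversine_km(13.75, 100.50, 18.79, 98.98)  # roughly 580 km
```

Road distance and travel time will generally exceed this geodesic figure, which is the motivation for comparing the three measures.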
The basic gravity model is: $${Y}_{ij}=k \frac{{P}_{i}^{\alpha }{P}_{j}^{\beta }}{{D}_{ij}^{\gamma }}$$ where \({Y}_{ij}\) is the number of people who move from area \(i\) to area \(j\), \(k\) is a constant term, \({P}_{i}\) is the population in area \(i\), \({P}_{j}\) is the population in area \(j\), and \({D}_{ij}\) is some measure of distance between \(i\) and \(j\), noting that distance may not be symmetric. The parameters \(k\), \(\alpha\), \(\beta\), and \(\gamma\) are estimated by fitting a Poisson model: $$\mathrm{log}\left({Y}_{ij}\right)=k+\alpha \mathrm{log}\left({P}_{i}\right)+\beta \mathrm{log}\left({P}_{j}\right)-\gamma \mathrm{log}\left({D}_{ij}\right).$$ In addition to the naïve gravity model, we also adjusted for gross provincial product per capita. The best fit according to in-sample error metrics was the adjusted travel time model (SI Appendix, Table S1). We identified outliers as those observations with a Cook's distance greater than five times the mean Cook's distance.

We evaluated the predictive accuracy of two different types of models: (1) a data-driven network approach built using an L1-regularized regression approach (the least absolute shrinkage and selection operator, LASSO) and (2) autoregressive integrated moving average (ARIMA) models, both with and without a seasonal component (SARIMA). In addition, for the mobility-augmented autoregressive models, human mobility is accounted for by including lagged case data from the top five areas (i.e., origins) of travelers as covariates in the model. We compared both sets of autoregressive models to the network approach predictions using a sliding window of observation and rolling forecast target as described below.
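In its fitted, log-linear form, the gravity model's prediction is straightforward to compute. The sketch below uses made-up parameter values, not the paper's estimates:

```python
import math

# Gravity-model prediction in the log-linear form fit above. The
# parameter values are illustrative only, not fitted estimates.
def gravity_flow(k, alpha, beta, gamma, pop_i, pop_j, dist_ij):
    log_y = (k + alpha * math.log(pop_i) + beta * math.log(pop_j)
             - gamma * math.log(dist_ij))
    return math.exp(log_y)

# With k = 0 and alpha = beta = 1, gamma = 2, this reduces to
# P_i * P_j / D^2, i.e. 1000 * 2000 / 100 = 20000.
flow = gravity_flow(0.0, 1.0, 1.0, 2.0, 1000.0, 2000.0, 10.0)
```

Fitting, as the text describes, amounts to a Poisson regression of observed trip counts on the three log-transformed covariates (plus gross provincial product per capita in the adjusted model).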
Network models

Based on a previous model designed to leverage spatially-correlated cases of influenza53, we fit a multivariate linear regression on the log of dengue case counts for area \(i\) in month \(t\), with the log of dengue case counts in areas \(j\) at time \(t-h\), where \(h\) is our forecasting horizon, as the covariates. Let \({y}_{i,t}=\mathrm{ln}({c}_{i,t}+1)\) where \({c}_{i,t}\) is the count of cases in location \(i\) at time \(t\): $${y}_{i,t}={\beta }_{0i}+\sum_{j=1}^{J}{\beta }_{j}{y}_{j,t-h}+\epsilon .$$ We used a sliding window of 42 months and \(h\) between 1 and 6. All values of \({y}_{i,t}\) were standardized to be mean-centered with unit variance in order to ensure the coefficients are not scale-dependent. For all prediction months, there were more areas (77), and hence input variables, than observations (42), and thus this formulation cannot be solved using an ordinary least squares (OLS) approach. To address this, we used an \({L}_{1}\) regularization approach to identify a parsimonious model that uses fewer input variables than the number of available observations. This penalization approach acts both to prevent overfitting and to select the most informative covariates (i.e., provinces). Specifically, we used the least absolute shrinkage and selection operator, LASSO, which minimizes the same objective function as regular OLS while penalizing the sum of the absolute values of the coefficients (the \({L}_{1}\) norm) with a hyper-parameter \(\lambda\): $$\mathrm{min}\left\{\frac{1}{N}{\Vert y-X\beta \Vert }_{2}^{2}+\lambda {\Vert \beta \Vert }_{1}\right\}$$ where the magnitude of the hyper-parameter \(\lambda\) is identified using cross-validation on the training set. This approach shrinks the coefficients of non-informative or redundant areas to zero and provides a straightforward interpretation of the results, allowing identification of which areas contributed the most predictive power for any given window of observation and target area.
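The sparsity-inducing behavior of the LASSO objective above can be illustrated with a tiny coordinate-descent solver. This is a didactic sketch with invented data; the actual analyses would use a standard solver:

```python
# Tiny coordinate-descent LASSO for the objective above,
#   min (1/N) ||y - X b||^2 + lam * ||b||_1.
# Didactic sketch only; data and penalty are invented for illustration.

def soft_threshold(rho, lam):
    """Soft-thresholding operator, the core of the L1 update."""
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # Correlation of feature j with the partial residual
            # (response minus the fit from all other features).
            rho = 2.0 / n * sum(
                X[i][j] * (y[i] - sum(X[i][k] * beta[k]
                                      for k in range(p) if k != j))
                for i in range(n)
            )
            z = 2.0 / n * sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / z
    return beta

# y depends only on the first column; with a moderate penalty the
# irrelevant second coefficient is shrunk exactly to zero.
X = [[1.0, 0.5], [2.0, -0.5], [3.0, 0.5], [4.0, -0.5]]
y = [2.0, 4.0, 6.0, 8.0]
beta = lasso_cd(X, y, lam=0.5)  # beta[1] ends up exactly 0.0
```

The exact zero on the uninformative coefficient is what makes the selected non-zero provinces directly interpretable, as the text notes.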
Autoregressive models

As a baseline for comparing model predictions, we used autoregressive integrated moving average (ARIMA) models, a common time series method in epidemiological modeling and dengue forecasting. These models have been used extensively in dengue prediction efforts and often incorporate a seasonal component, called seasonal ARIMA or SARIMA. We use the (p, d, q)(P, D, Q)s convention, where p indicates the autoregressive order, d indicates the amount of differencing, and q indicates the order of the moving average. The seasonal component, (P, D, Q)s, represents the same parameters with a seasonal period of s months. Additional exogenous variables (i.e., time series) can be added as covariates in this framework. We reduced the parameter space of the SARIMA models using previous literature12 and our expert opinion. Specifically, we systematically searched models with lags of up to 4 months (p = 1, 2, 3, or 4) or 3 years (P = 1, 2, or 3), included a differencing order of up to 1 (d and D = 0 or 1), and excluded all moving averages (q and Q fixed at 0). This results in a set of 15 model parameterizations: eight non-seasonal ARIMAs and seven seasonal ARIMAs. For each parameterization, we performed a univariate SARIMA as well as a mobility-augmented SARIMA. The mobility-augmented SARIMA incorporates the time series of cases from the top five connected areas, based on observed mobility, as exogenous covariates. Similar to the LASSO, we used a sliding window of 42 months, and in the case of augmented SARIMA models, we lagged the exogenous covariates by \(h\).

Adaptive mosaic model

We show the feasibility of combining different classes of the above models by using an ensemble approach we call the "adaptive mosaic model." For each province and forecasting horizon, we select the best performing model using a winner-takes-all approach based on the out-of-sample prediction error of the previous 3 months.
By repeating this procedure for every prediction month, forecasting horizon, and province, the underlying base model can adapt over time (Fig. 7).

Accuracy metrics and model comparison

Consistent with previous research10,54, when assessing the predictive performance of a single model, we used the mean absolute error (MAE), and when assessing the relative performance of two models, we used the relative mean absolute error (relMAE). The MAE of the log-transformed counts is as follows: $$MAE=\frac{1}{T}\sum_{t=1}^{T}|\mathrm{ln}\left({y}_{t}+1\right)-\mathrm{ln}({\widehat{y}}_{t}+1)|$$ where \({y}_{t}\) and \({\widehat{y}}_{t}\) are the observed and predicted counts for prediction month \(t\). One strength of this approach is that the error depends only on the ratio of predicted to observed counts, not their magnitude (e.g., predicted and observed counts of 110 and 100 yield essentially the same error as 11 and 10, and the absolute value makes over- and under-prediction by the same ratio equivalent). This is an important feature given the differences in population size and case counts between provinces. When comparing model \(A\) to model \(B\) at forecast horizon \(h\), we take the ratio of their MAEs: $$relMA{E}_{A,B,h}=\frac{MA{E}_{A,h}}{MA{E}_{B,h}}.$$ To assess the predictive performance of each model, we used retrospective out-of-sample estimates of the mean absolute error, assuming we only had data prior to the time of estimation and based on a 42-month sliding window of observation, such that all models are fit on 42 months of observation and evaluated on the out-of-sample forecast as the model slides forward through the remaining available data. For example, a 6-month prediction for June of one year would only include data up to December of the year before, and only as far back as 42 months from that December.
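The error metric and the mosaic model's winner-takes-all rule fit together naturally; the sketch below uses invented prediction values purely for illustration:

```python
import math

# MAE on log-transformed counts, as defined above; the error depends
# only on the ratio (y + 1) / (yhat + 1), so provinces with very
# different case counts are comparable.
def mae_log(observed, predicted):
    return sum(
        abs(math.log(y + 1) - math.log(yhat + 1))
        for y, yhat in zip(observed, predicted)
    ) / len(observed)

def pick_winner(recent_observed, recent_predictions):
    """Winner-takes-all: lowest MAE over the recent evaluation window."""
    return min(recent_predictions,
               key=lambda m: mae_log(recent_observed, recent_predictions[m]))

# Over- and under-prediction by the same ratio give the same error:
assert mae_log([110], [100]) == mae_log([100], [110])

# Hypothetical last-3-month forecasts from two model classes:
observed = [120, 80, 150]
predictions = {
    "lasso":  [110, 85, 140],
    "sarima": [60, 150, 300],
}
best = pick_winner(observed, predictions)  # "lasso" has the lower recent error
```

Repeating this selection for every province, horizon, and prediction month gives the adaptive behavior described in the text.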
Since there are 7 years of data and we use half (42 months) to fit the model, this provides an additional 42 months to evaluate prediction error as the window of observation slides forward (noting that the number of months available in the evaluation period is also a function of the prediction horizon). To compare across multiple models (e.g., to find the model with the best t + 1 month forecast in a single province), we used the baseline AR(1) (i.e., ARIMA(1,0,0)(0,0,0) with no exogenous variables) as our reference model. Thus, the relMAE can be interpreted as the relative under- or over-performance of our model compared to a standard epidemiological model, averaged over all prediction months. To assess the utility of call detail records, for each province and forecasting horizon we selected the best performing model of each class. We then compared the CDR SARIMA to each other class using a Wilcoxon signed-rank test on the out-of-sample prediction errors. Statistically significant differences are shown in the province-specific reports (SI Appendix, Text S1) and in Figure S7. Similarly, we compared the proposed mosaic model to a simple AR(1) using a Wilcoxon signed-rank test (SI Appendix, Fig. S8).

References

Bhatt, S. et al. The global distribution and burden of dengue. Nature 496, 504–507 (2013).
WHO. Dengue Fact Sheet (WHO, Geneva, 2018).
Guzman, M. G. & Harris, E. Dengue. Lancet (London, England) 385, 453–465 (2015).
Tatem, A. J., Hay, S. I. & Rogers, D. J. Global traffic and disease vector dispersal. Proc. Natl. Acad. Sci. USA 103, 6242–6247 (2006).
Wesolowski, A. et al. Impact of human mobility on the emergence of dengue epidemics in Pakistan. Proc. Natl. Acad. Sci. 112, 11887–11892 (2015).
Stanaway, J. D. et al. The global burden of dengue: An analysis from the Global Burden of Disease Study 2013. Lancet Infect. Dis. 16, 712–723 (2016).
Halstead, S. B.
Dengue vaccine development: A 75% solution? Lancet (London, England) 380, 1535–1536 (2012).
WHO. Global Strategy for Dengue Prevention and Control 2012–2020 (World Health Organization, Geneva, 2012).
Dengue vaccine: WHO position paper, September 2018—Recommendations. Vaccine 37, 4848–4849 (2018).
Lauer, S. A. et al. Prospective forecasts of annual dengue hemorrhagic fever incidence in Thailand, 2010–2014. Proc. Natl. Acad. Sci. 115, 201714457 (2018).
Reich, N. G. et al. Challenges in real-time prediction of infectious disease: A case study of dengue in Thailand. PLoS Negl. Trop. Dis. 10, e0004761 (2016).
Johansson, M. A., Reich, N. G., Hota, A., Brownstein, J. S. & Santillana, M. Evaluating the performance of infectious disease forecasts: A comparison of climate-driven and seasonal dengue forecasts for Mexico. Sci. Rep. 6, 33707 (2016).
Yamana, T. K., Kandula, S. & Shaman, J. Superensemble forecasts of dengue outbreaks. J. R. Soc. Interface 13, 20160410 (2016).
Promprou, S., Jaroensutasinee, M. & Jaroensutasinee, K. Forecasting dengue haemorrhagic fever cases in Southern Thailand using ARIMA models. Dengue Bull. 30, 99–106 (2006).
Choudhury, Z., Banu, S. & Islam, A. Forecasting dengue incidence in Dhaka, Bangladesh: A time series analysis. Dengue Bull. 32, 29–37 (2018).
Hu, W., Clements, A., Williams, G. & Tong, S. Dengue fever and El Niño/Southern Oscillation in Queensland, Australia: A time series predictive model. Occup. Environ. Med. 67, 307 (2010).
Gharbi, M. et al. Time series analysis of dengue incidence in Guadeloupe, French West Indies: Forecasting models using climate variables as predictors. BMC Infect. Dis. 11, 166 (2011).
Yang, S. et al. Advances in using Internet searches to track dengue. PLoS Comput. Biol. 13, e1005607 (2017).
Martinez, E. Z., Silva, E. A. A. & Fabbro, A. L.
A SARIMA forecasting model to predict the number of cases of dengue in Campinas, State of São Paulo, Brazil. Rev. Soc. Bras. Med. Trop. 44, 436–440 (2011).
Hii, Y. L., Zhu, H., Ng, N., Ng, L. C. & Rocklöv, J. Forecast of dengue incidence using temperature and rainfall. PLoS Negl. Trop. Dis. 6, e1908 (2012).
Eastin, M. D., Delmelle, E., Casas, I., Wexler, J. & Self, C. Intra- and interseasonal autoregressive prediction of dengue outbreaks using local weather and regional climate for a tropical environment in Colombia. Am. J. Trop. Med. Hyg. 91, 598–610 (2014).
Baquero, O., Santana, L. & Chiaravalloti-Neto, F. Dengue forecasting in São Paulo city with generalized additive models, artificial neural networks and seasonal autoregressive integrated moving average models. PLoS ONE 13, e0195065 (2018).
Buczak, A. L. et al. Ensemble method for dengue prediction. PLoS ONE 13, e0189988 (2018).
Olliaro, P. et al. Improved tools and strategies for the prevention and control of arboviral diseases: A research-to-policy forum. PLoS Negl. Trop. Dis. 12, e0005967 (2018).
Scarpino, S. V., Meyers, L. & Johansson, M. A. Design strategies for efficient arbovirus surveillance. Emerg. Infect. Dis. 23, 642–644 (2017).
Chretien, J.-P., Rivers, C. M. & Johansson, M. A. Make data sharing routine to prepare for public health emergencies. PLoS Med. 13, e1002109 (2016).
Stolerman, L. M., Coombs, D. & Boatto, S. SIR-network model and its application to dengue fever. SIAM J. Appl. Math. 75, 2581–2609 (2015).
Arino, J. & van den Driessche, P. A multi-city epidemic model. Math. Popul. Stud. 10, 175–193 (2003).
Liu, K. et al. Spatiotemporal patterns and determinants of dengue at county level in China from 2005–2017. Int. J. Infect. Dis. 77, 96–104 (2018).
Lloyd, A. L. & Jansen, V. Spatiotemporal dynamics of epidemics: Synchrony in metapopulation models. Math. Biosci. 188, 1–16 (2004).
Lourenço, J.
& Recker, M. Natural, persistent oscillations in a spatial multi-strain disease system with application to dengue. PLoS Comput. Biol. 9, e1003308 (2013).
Lee, S. & Castillo-Chavez, C. The role of residence times in two-patch dengue transmission dynamics and optimal strategies. J. Theor. Biol. 374, 152–164 (2015).
Luz, P. M., Mendes, B. V., Codeço, C. T., Struchiner, C. J. & Galvani, A. P. Time series analysis of dengue incidence in Rio de Janeiro, Brazil. Am. J. Trop. Med. Hyg. 79, 933–939 (2008).
Stolerman, L., Maia, P. & Kutz, J. N. Data-driven forecast of dengue outbreaks in Brazil: A critical assessment of climate conditions for different capitals. arXiv:1701.00166 (2016).
Johansson, M. A. et al. An open challenge to advance probabilistic forecasting for dengue epidemics. Proc. Natl. Acad. Sci. 116, 24268–24274 (2019).
Ray, E. L., Sakrejda, K., Lauer, S. A., Johansson, M. A. & Reich, N. G. Infectious disease prediction with kernel conditional density estimation. Stat. Med. 36, 4908–4929 (2017).
Nunes, M. R. et al. Air travel is associated with intracontinental spread of dengue virus serotypes 1–3 in Brazil. PLoS Negl. Trop. Dis. 8, e2769 (2014).
Lourenço, J. & Recker, M. The 2012 Madeira dengue outbreak: Epidemiological determinants and future epidemic potential. PLoS Negl. Trop. Dis. 8, e3083 (2014).
Stoddard, S. T. et al. House-to-house human movement drives dengue virus transmission. Proc. Natl. Acad. Sci. 110, 994–999 (2013).
Zhu, G., Liu, J., Tan, Q. & Shi, B. Inferring the spatio-temporal patterns of dengue transmission from surveillance data in Guangzhou, China. PLoS Negl. Trop. Dis. 10, e0004633 (2016).
Wesolowski, A., O'Meara, W., Eagle, N., Tatem, A. J. & Buckee, C. O. Evaluating spatial interaction models for regional mobility in Sub-Saharan Africa. PLoS Comput. Biol.
11, e1004267 (2015).
Limkittikul, K., Brett, J. & L'Azou, M. Epidemiological trends of dengue disease in Thailand (2000–2011): A systematic literature review. PLoS Negl. Trop. Dis. 8, e3241 (2014).
Salje, H. et al. Revealing the microscale spatial signature of dengue transmission and immunity in an urban population. Proc. Natl. Acad. Sci. 109, 9535–9538 (2012).
Cummings, D. A. et al. Travelling waves in the occurrence of dengue haemorrhagic fever in Thailand. Nature 427, 344–347 (2004).
Salje, H. et al. Dengue diversity across spatial and temporal scales: Local structure and the effect of host population size. Science 355, 1302–1306 (2017).
van Panhuis, W. G. et al. Region-wide synchrony and traveling waves of dengue across eight countries in Southeast Asia. Proc. Natl. Acad. Sci. 112, 13069–13074 (2015).
Kraemer, M. U. G. et al. Mapping global variation in human mobility. Nat. Hum. Behav. 4, 800–810 (2020).
Gaughan, A. E., Stevens, F. R., Linard, C., Jia, P. & Tatem, A. J. High resolution population distribution maps for Southeast Asia in 2010 and 2015. PLoS ONE 8, e55882 (2013).
NESDB. Gross Regional and Provincial Product Chain Measures 2015 (National Economic and Social Development Board of Thailand, Bangkok, 2017).
Luxen, D. & Vetter, C. Real-time routing with OpenStreetMap data. In Proceedings of the 19th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (2011). https://doi.org/10.1145/2093973.2094062.
Tinbergen, J. Shaping the World Economy: Suggestions for an International Economic Policy (Twentieth Century Fund, New York, 1962).
Lewer, J. J. & den Berg, H. A gravity model of immigration. Econ. Lett. 99, 164–167 (2008).
Lu, F. S., Hattab, M. W., Clemente, C., Biggerstaff, M. & Santillana, M. Improved state-level influenza nowcasting in the United States leveraging Internet-based data and network approaches. Nat.
Commun. 10, 147 (2019).
Reich, N. G. et al. Case study in evaluating time series prediction models using the relative mean absolute error. Am. Stat. 70, 285–292 (2016).
Wickham, H. ggplot2: Elegant Graphics for Data Analysis (Springer, New York, 2016).
R Core Team. R: A Language and Environment for Statistical Computing (R Foundation for Statistical Computing, Vienna, 2018).

RJM and NE were supported by Asian Development Bank TA-8656. The content is solely the responsibility of the authors and does not necessarily represent the official views of the funders. MS was partially supported by the National Institute of General Medical Sciences of the National Institutes of Health under Award Number R01GM130668. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. COB and MS thank the Harvard Data Science Initiative for their support in partially funding this collaborative work.

These authors contributed equally: Mathew V. Kiang and Mauricio Santillana. These authors jointly supervised this work: Richard J. Maude and Caroline O. Buckee.

Department of Epidemiology and Population Health, Stanford University, Stanford, CA, USA: Mathew V. Kiang
Department of Pediatrics, Harvard Medical School, Boston, MA, USA: Mauricio Santillana
Computational Health Informatics Program, Boston Children's Hospital, Boston, MA, USA
Department of Social and Behavioral Sciences, Harvard T.H. Chan School of Public Health, Boston, MA, USA: Jarvis T. Chen & Nancy Krieger
Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA: Jukka-Pekka Onnela
Telenor Research, Oslo, Norway: Kenth Engø-Monsen
Mahidol-Oxford Tropical Medicine Research Unit, Faculty of Tropical Medicine, Mahidol University, Bangkok, Thailand: Nattwut Ekapirat & Richard J.
Maude
Bureau of Vector Borne Disease, Ministry of Public Health, Nonthaburi, Thailand: Darin Areechokchai & Preecha Prempree
Centre for Tropical Medicine and Global Health, Nuffield Department of Medicine, University of Oxford, Oxford, UK: Richard J. Maude
Department of Epidemiology, Harvard T.H. Chan School of Public Health, Boston, MA, USA: Caroline O. Buckee
Center for Communicable Disease Dynamics, Harvard T.H. Chan School of Public Health, 677 Huntington Ave, 5th Floor, Boston, MA, 02115, USA: Richard J. Maude & Caroline O. Buckee

C.O.B. conceptualized the study. M.V.K., C.O.B., and M.S. designed the methodology. K.E.-M., N.E., D.A., P.P., and R.J.M. curated the data. M.V.K. conducted all analyses. M.V.K., M.S., J.T.C., N.K., and C.O.B. interpreted the results. M.V.K. prepared the original draft. M.V.K., M.S., J.T.C., J.P.O., N.K., K.E.-M., N.E., D.A., P.P., R.J.M., and C.O.B. provided critical feedback. C.O.B. and R.J.M. supervised this work. M.V.K., M.S., J.T.C., J.P.O., N.K., K.E.-M., N.E., D.A., P.P., R.J.M., and C.O.B. reviewed and approved the submitted manuscript.

Correspondence to Caroline O. Buckee. The authors declare no competing interests.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material.
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Kiang, M.V., Santillana, M., Chen, J.T. et al. Incorporating human mobility data improves forecasts of Dengue fever in Thailand. Sci Rep 11, 923 (2021). https://doi.org/10.1038/s41598-020-79438-0

Scientific Reports ISSN 2045-2322 (online)
Does Euclidean geometry require a complete metric space?

Note: I'm not sure how to tag this question so please fix tags as appropriate and necessary. This question relates to Tarski's axioms of Euclidean geometry, not Hilbert's axioms. It says on the same page (Theorem 3.1 -- see also this related question on Math.SE) that: For every model of Euclidean Plane Geometry (using Tarski's axioms) there is a real closed field $R$ such that $M \cong R \times R$ as models. From what I gather from nLab's page on real closed fields, these are strictly more general than the real numbers; for example, the real algebraic numbers are also a real closed field. However, the real algebraic numbers are not metrically complete. (I think.) In other words, not every Cauchy sequence converges. Question: How can a real closed field, which is not metrically complete, serve as a model for Euclidean plane geometry? In particular, how can the real algebraic numbers serve as a model for Euclidean geometry? For example, such a field cannot express the ratio of the circumference of a circle to its diameter in Euclidean plane geometry, since $\pi$ is transcendental and thus contained only in the metric completion of the rational numbers, not in their real closure. One of Tarski's axioms is described on the nLab page as being a "Dedekind cut axiom expressed in first order terms". If I remember real analysis correctly, Dedekind cuts allow one to construct the metric completion of the rational numbers (the real numbers). So if one of Tarski's axioms implies metric completeness (does it not? if not, why doesn't it?), then how can the real algebraic numbers, which aren't metrically complete (I think), serve as a model? I'm not sure if the Cantor-Dedekind axiom is one of Tarski's axioms or the same as the aforementioned "Dedekind cut axiom". If Tarski's axioms don't require a metrically complete space, then is the Cantor-Dedekind axiom not generally valid for models of Tarski's axioms?
The answers to this related question seem to suggest that $\mathbb{Q}$ is insufficient for Euclidean geometry, which makes sense since $\mathbb{Q}$ is not real closed. However, it is unclear from the answers given whether the problems with $\mathbb{Q}$ arise only because of the non-existence of algebraic irrational numbers (e.g. $\sqrt{2}$) or also because of the non-existence of transcendental irrational numbers, e.g. $\pi$. The latter would imply that the real algebraic numbers are also insufficient for Euclidean geometry. Also, one of the axioms given for Euclidean plane geometry (on p. 171, M.2) in Agricola and Friedrich's Elementary Geometry is that the plane is a complete metric space. I had assumed that these axioms were just a (possibly less rigorous) rewording of Tarski's axioms, but if non-complete metric spaces also serve as models for Euclidean geometry, then it would seem that this axiom is an invention/addition of the authors, which perhaps should have been mentioned explicitly.

Tags: real-analysis, soft-question, euclidean-geometry, model-theory, axioms. Asked by Chill2Macht.

I don't understand what you find puzzling about there being a strange or unexpected model of a first-order axiom system. Isn't asking "how can there be this weird model of axiomatized Euclidean plane geometry?" like asking "how can there be nonstandard (e.g. uncountable) models of first-order Peano arithmetic?" The answer is: because the axioms allow it, and it's the business of model theory to discover such allowances. The error is in thinking the axioms entailed metric completeness: the model theory shows they don't. – symplectomorphic Mar 8 '17 at 19:17

The point is that in Tarski's axioms, any point you can construct has coordinates which are real algebraic over any points used to construct it. This is why real closed fields are the natural level of generality. Tarski's axioms don't directly talk about things like the circumferences of circles.
$\endgroup$ – Qiaochu Yuan Mar 8 '17 at 19:20 $\begingroup$ PS: You're engaging in equivocation when you write "if non-complete metric spaces also serve as models for Euclidean geometry, then it would seem ..." Those spaces serve as models for Tarski's axiomatization of Euclidean geometry, not Euclidean geometry, period. It seems to me bizarre to assume that anytime, in any context, an author says "Euclidean geometry," without further qualification, he or she must be referring to models of Tarski's axioms. The author could be giving a rival axiomatization. Or the author could be referring to the standard model of $\mathbb{R}^2$. Etc. $\endgroup$ – symplectomorphic Mar 8 '17 at 19:32 $\begingroup$ William: I don't understand all the flak you are getting. I think this is a great question. It is well researched, shows an attempt to reconcile different mathematical intuitions, and demonstrates courage to branch out of your area of expertise. Furthermore, it isn't written in a tone which argues that there is something wrong with the mathematics. I wish more questions were like this. $\endgroup$ – Kyle Mar 9 '17 at 0:31 $\begingroup$ @William: What is your background in model theory? Do you know the compactness theorem? $\endgroup$ – Kyle Mar 9 '17 at 0:35 Note: Since I'm not confident in this answer, because it comes from quoting a claim in a Wikipedia article which doesn't have a citation, and because I know next to nothing about mathematical logic, I am making it community wiki. This answer may be wrong, so please take it with a grain of salt. The Wikipedia article about real closed fields (under "Model theory:...") says the following: Euclidean geometry (without the ability to measure angles) is also a model of the real field axioms, and thus is also decidable. That this is true seems to follow from Theorem 3.1 on the nLab page I mentioned earlier. The key point in the above claim is the part in parentheses, "without the ability to measure angles".
It seems that Tarski's axioms allow us to do everything we might want in Euclidean geometry except measure angles. Not that this is impossible in all models of Tarski's axioms -- for instance it clearly is possible using $\mathbb{R}^2$ as a model. However, the fact that $\pi$ is transcendental and not algebraic seems to imply that it isn't possible to measure (all) angles using the real algebraic numbers as a model for Tarski's axioms of Euclidean geometry. (This is because the definition of angles in terms of radians uses $\pi$.) Thus, I would conclude from this that the answer to my question likely is: Metric completeness is required in a model for Euclidean geometry if and only if one needs to measure angles. If one does not need to measure angles, then it is unnecessary. Note: It occurred to me recently that my question is similar in spirit to one asked earlier by someone else: In algebraic geometry, why do we use $\mathbb C$ instead of the algebraic closure of $\mathbb Q$? In particular, the answers to that question seem to indicate that, besides making measuring angles possible, metric completeness can sometimes make geometry more convenient. Also, I have to imagine that it is no coincidence that Tarski both proved the Lefschetz principle and developed his axioms for Euclidean geometry. Namely, both involve statements about the equivalence of (real) algebraically closed fields in first order logic. Chill2Macht
Automatic software correction of residual aberrations in reconstructed HRTEM exit waves of crystalline samples Colin Ophus ORCID: orcid.org/0000-0003-2348-85581, Haider I Rasool2,3, Martin Linck4, Alex Zettl2,3 & Jim Ciston1 Advanced Structural and Chemical Imaging volume 2, Article number: 15 (2016) We develop an automatic and objective method to measure and correct residual aberrations in atomic-resolution HRTEM complex exit waves for crystalline samples aligned along a low-index zone axis. Our method uses the approximate rotational point symmetry of a column of atoms or single atom to iteratively calculate a best-fit numerical phase plate for this symmetry condition, and does not require information about the sample thickness or precise structure. We apply our method to two experimental focal series reconstructions, imaging a β-Si3N4 wedge with Al and O doping, and a single-layer graphene grain boundary. We use peak and lattice fitting to evaluate the precision of the corrected exit waves. We also apply our method to the exit wave of a Si wedge retrieved by off-axis electron holography. In all cases, the software correction of the residual aberration function improves the accuracy of the measured exit waves. Hardware aberration correction for electron beams in transmission electron microscopy (TEM) is now widespread, substantially improving the interpretable resolution in TEM micrographs [1–4]. This technology is enabled by the combination of two factors: the ability to accurately measure optical aberrations in the electron beam, and a system of multipole lenses that can compensate for these measured aberrations. Many authors have studied the problem of direct aberration measurement, and most solutions involve capturing a Zemlin tableau [5–8]. This method requires a thin, amorphous object that can approximate an ideal weak-phase object. Many samples of interest, however, are partially or fully crystalline.
Thus, aberrations must be measured and corrected on an amorphous sample region before micrographs can be recorded on the region of interest. During this delay, the aberrations may drift due to electronic instabilities in the microscope [9], and this factor coupled with imperfect hardware correction can lead to residual aberrations in the resulting electron plane wave measurements. One possible solution is to reconstruct the complex electron wavefunction via inline holography, by taking a defocus series and employing an exit wave reconstruction (EWR) algorithm such as Gerchberg–Saxton or the Transport of Intensity Equation [10–16]. Alternatively, an exit wave can be reconstructed by interferometric methods, i.e. off-axis electron holography [17, 18]. We can then estimate the residual aberrations and apply a numerical phase plate to the reconstructed complex wavefunction to produce aberration-free images [19]. These numerical corrections fall into two categories: manual correction, where the operator attempts to determine the aberrations present by trial and error, and automatic correction, where the aberrations are directly measured in some manner. While the theory of aberration determination from a thin, amorphous sample is well-understood (and used to calibrate the hardware corrector on a modern TEM) [20–22], purely crystalline samples are much more difficult to correct due to the sparsity of diffraction space information [23]. If the image is of a crystal viewed along a low-index zone axis, there is no simple Fourier space technique to measure residual aberrations for a sample of unknown thickness or composition. Some authors have proposed using entropy methods [24] or measuring atomic column asymmetry within Fourier space [25] to measure residual aberrations. However, the first method requires well-separated atomic columns and the second can have difficulty measuring multiple simultaneous aberrations.
We also note that some authors have used converged scanning transmission electron microscopy (STEM) probes to directly evaluate the aberration coefficients from crystalline samples [26–28], but these methods are not directly applicable to plane wave TEM measurements. In this study, we propose a new method to measure aberrations from TEM images of crystalline samples containing on-axis atomic columns or single atoms. We use these measurements of residual aberrations to iteratively correct the complex exit wave until convergence is reached. Our method requires only a rough guess of the projected crystal structure and a regular (defect-free) crystalline region in the image field of view. We test this method on three experimental datasets: focal series reconstructions of a β-Si3N4 wedge with Al and O doping and a single-layer graphene grain boundary, and an off-axis hologram measurement of a Si wedge. Calculating images with radial point symmetry HRTEM images of thin, crystalline samples oriented along low-index zone axes usually have a high degree of radial point symmetry around each atomic (or atomic column) coordinate. When multiple peaks are close together, interference between adjacent columns can create amplitude or phase images that appear to break the radial symmetry. However, this symmetry breaking is often due to constructive and destructive interference of the underlying complex wave, and the overall exit wave can still be well-described as a sum of isolated, radially-symmetric complex atomic shape functions. To demonstrate this, we have simulated several examples of exit waves of a silicon sample using the multislice method [29], the amplitudes of which are plotted in Fig. 1a. a Simulated exit waves of Si at different thicknesses and zone axes. b Symmetrized exit waves from a. c, d Real and imaginary parts of fitted atomic shape functions. The first two simulations in Fig.
1a, the [001] and [111] zone axes, have equally spaced atomic columns which show local radial symmetry around each peak. The third and fourth simulations in Fig. 1a contain Si dumbbells and appear to have broken radial symmetry at much shorter distances. These images, however, can be well-described by a sum of identical, radially-symmetric atomic peak shape functions, shown in Fig. 1b–d. A point-symmetrized image can be calculated using a few simple steps. First, the atomic coordinates must be estimated (from a known structure) or fitted to the image. Each exit wave pixel value \(\psi (x,y)\) is equal to $$\begin{aligned} \psi (x,y)=A_0+\sum _{j=1}^J\sum _{k=1}^{K_j}s_j\left[ \sqrt{(x-x_k^j)^2 + (y-y_k^j)^2} \right] , \end{aligned}$$ where \(A_0\) is a constant carrier wave value, there are J atom types included, \(\mathbf {s}_j\) is the complex atomic shape function (a function of radial distance) for each atom type j, and there are \(K_j\) atoms of type j, located at coordinates \((x_k^j,y_k^j)\). Next, we calculate an atomic distance matrix \(\mathbf {A}\) which relates all image pixels to their distances to all nearby atomic coordinates. Each row of this matrix corresponds to a different image pixel (x, y), while the columns represent all possible (rounded) distances to all nearby atomic sites, divided up into different atomic species. This matrix is moderately sparse, where the only non-zero values are ones in the first column (corresponding to \(A_0\)) and ones at the rounded distances of all atoms within some cutoff radius. This formalism allows us to solve for discretized atomic shape function(s) \(\mathbf {s}_j\) using the set of linear equations given by $$\begin{aligned} \mathbf {A} \begin{bmatrix} A_0 \\ s_1 \\ \vdots \\ s_J \end{bmatrix} = \psi (x,y) , \end{aligned}$$ which can be solved using typical regression methods. This symmetrization method has been applied in the examples shown in Fig. 1b, where the fitted atomic shape functions are given in Fig. 1c, d.
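As a concrete illustration, the two relations above can be set up as a dense least-squares problem. The following is a minimal numpy sketch for a single atom type (the function name and the dense matrix layout are illustrative assumptions, not the paper's Matlab implementation), with one column for the carrier value \(A_0\) and one column per rounded radius:

```python
import numpy as np

def symmetrize_wave(psi, coords, r_max):
    """Build the pixel-to-atom distance matrix A of Eq. (2) for a
    single atom type, least-squares fit the carrier value A0 and the
    discretized radial shape function s, and return the
    point-symmetrized wave A @ [A0; s] reshaped to the image."""
    ny, nx = psi.shape
    yy, xx = np.mgrid[0:ny, 0:nx]

    # One constant column for A0, then one column per rounded radius.
    A = np.zeros((ny * nx, 1 + r_max))
    A[:, 0] = 1.0
    for cy, cx in coords:
        r = np.rint(np.hypot(yy - cy, xx - cx)).astype(int).ravel()
        inside = r < r_max
        A[np.flatnonzero(inside), 1 + r[inside]] += 1.0

    # Complex right-hand side; numpy promotes A to complex for lstsq.
    sol, *_ = np.linalg.lstsq(A, psi.ravel(), rcond=None)
    sym = (A @ sol).reshape(ny, nx)
    return sym, sol[0], sol[1:]
```

Multiple atom types would simply append one block of radius columns per species, and a sparse solver (e.g. scipy.sparse.linalg.lsqr) keeps memory manageable for large reference regions.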
In all cases, the symmetrized exit wave is in perfect or good agreement with the original exit waves shown in Fig. 1a. This simple method for calculating point-symmetrized exit waves forms the basis of the aberration correction algorithm presented here. Note that while we have chosen to solve the peak shape functions in real space, it is also possible to deconvolve a point spread function in Fourier space, similar to the method described by van den Broek et al. [30]. The real-space method simplifies handling of the boundary conditions (by simply not including pixels that are not surrounded by enough atomic coordinates) and can easily handle multiple atom types. Finally, we note that a constant value (setting all non-zero values equal to ones) does not need to be assumed for all atomic sites; instead a complex value at the peak coordinate location can be directly measured from the exit wave, or refined by least squares. This step improves accuracy if the reference region used for solving the residual aberrations has non-constant thickness. Coherent wave aberrations A complex exit wave \(\psi (x,y)\) measured with off-axis holography or reconstructed from inline holography that contains residual aberrations described by the Fourier-space aberration function \(\chi (q_x,q_y)\) is related to the aberration-free exit wave \(\psi _0(x,y)\) by the expression [29] $$\begin{aligned} \Psi (q_x,q_y) = \Psi _0(q_x,q_y) \exp \left[ -i \chi (q_x,q_y) \right] \end{aligned}$$ where \(\Psi (q_x,q_y)\) and \(\Psi _0(q_x,q_y)\) are the 2D Fourier transforms of \(\psi (x,y)\) and \(\psi _0(x,y)\) respectively. The vectors (x, y) and \((q_x,q_y)\) represent the real space and Fourier space coordinate systems respectively.
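In code, the correction implied by this relation is a pointwise multiplication in Fourier space. The numpy sketch below assembles the phase plate from \((C_{m,n}^x, C_{m,n}^y)\) coefficients in the polar basis given in the next paragraph and applies its inverse, \(\exp(+i\chi)\), to the measured wave; the function names, the coefficient container, and the unit conventions are illustrative assumptions:

```python
import numpy as np

def chi(qx, qy, coeffs, wavelength):
    """Aberration phase chi(qx, qy) in radians; `coeffs` maps an
    order (m, n) to its pair (Cx, Cy), with Cy unused when n = 0."""
    q2 = wavelength**2 * (qx**2 + qy**2)
    phi = np.arctan2(qy, qx)  # correct sign in all four quadrants
    out = np.zeros_like(q2)
    for (m, n), (cx, cy) in coeffs.items():
        out += q2 ** (m + n / 2.0) * (cx * np.cos(n * phi) + cy * np.sin(n * phi))
    return out

def remove_aberrations(psi, coeffs, wavelength, px_size):
    """Invert Psi = Psi0 * exp(-i chi) by multiplying the measured
    wave's Fourier transform by exp(+i chi)."""
    ny, nx = psi.shape
    qx = np.fft.fftfreq(nx, d=px_size)[None, :]
    qy = np.fft.fftfreq(ny, d=px_size)[:, None]
    plate = np.exp(1j * chi(qx, qy, coeffs, wavelength))
    return np.fft.ifft2(np.fft.fft2(psi) * plate)
```

Applying the same routine with negated coefficients reproduces the forward aberration of the wave, which is a convenient self-consistency check.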
The aberration function used here is the basis function $$\begin{aligned} \chi (q_x,q_y)= & {} \sum _{m=0} \sum _{n=0} \left[ \lambda ^2 ({q_x}^2 + {q_y}^2) \right] ^{m+n/2} \nonumber \\&\cdot \left\{ C_{m,n}^x \cos \left[ n \cdot \mathrm {atan2} \left( q_y,q_x \right) \right] \right. \nonumber \\&\left. + \,C_{m,n}^y \sin \left[ n \cdot \mathrm {atan2}\left( q_y, q_x \right) \right] \right\} , \end{aligned}$$ where \((C_{m,n}^x,C_{m,n}^y)\) are the coefficients of the two orthogonal aberrations of order (m, n) in units of radians, and \(\mathrm {atan2}(q_y, q_x)\) is the arctangent function which returns the correct sign in all quadrants (all combinations of signs of \(q_x\) and \(q_y\)). The radial magnitude of each aberration scales with \(|q|^{2m + n}\) and the rotation symmetry is given by n. Note that when \(n=0\), the aberration is radially symmetric (e.g. constant value, defocus, spherical aberration) and no \(C_{m,n}^y\) term is necessary. Various authors use different conventions for dimensioning the coefficients \((C_{m,n}^x,C_{m,n}^y)\) [7, 19, 31]. We also note that this function describes only coherent wave aberrations that are constant over the field of view (aplanatic). Estimating residual aberration coefficients We now show how symmetrized exit waves can be used to estimate aberrations in images of crystalline samples. As an example, we have simulated exit waves with synthetic aberrations in Fig. 2a, b, for a 19.8 nm thick [011]-Si sample. In all cases except for the aberration-free image, applying an aberration phase plate causes distortions in the atomic images. a Phase plates for synthetic aberrations applied to simulated Si [011] exit waves, giving b amplitude images. c Symmetrized waves corresponding to b. d Fitted phase plate for aberrations up to 6th order. e Exit wave where phase plate in d is applied to images in b. Next, a symmetrized image is calculated from the aberrated wave and the approximate peak positions, shown in Fig. 2c.
The resulting images appear to be approximately aberration-free due to the radial symmetry imposed by constructing an exit wave from radially-symmetric point atomic shape functions, and can be used to estimate the aberration function \(\chi (q_x,q_y)\). To generate this estimate, we calculate the windowed Fourier transforms of both the aberrated and symmetrized waves. A window function is used to prevent boundary errors. Next, we measure the difference in phase between the two FFTs and use weighted least squares to fit the aberration coefficients. The weighting function is set to the magnitude of the original exit wave Fourier transform. This ensures that the strongest Bragg components dominate the aberration function fit. Figure 2d shows the fitted aberration function, including all aberrations up to 6th order. The fits are a good, but not perfect, match to the real aberration functions in Fig. 2a. Applying the fitted aberration functions to the aberrated images produces the images plotted in Fig. 2e. Similar to the fitted aberration function, these images are improved but not yet free of aberrations. This estimation method for the aberration function can be applied iteratively to produce an accurate measurement of the residual aberration functions. Iterative algorithm for estimating residual aberrations Our proposed algorithm for correcting residual aberrations in complex exit waves of crystalline samples is diagrammed in Fig. 3. We start with a reference region in the exit wave \(\psi (x,y)\). This region should be of roughly constant thickness and contain as few lattice defects as possible. Increasing the area of the reference region will improve the accuracy of the fitted aberrations, at the cost of increased computation time. From this reference region, we generate a list of atomic coordinates, and if multiple types of atoms are present, the corresponding site identities. Flow chart for the algorithm proposed in this work, labeled by steps a–g.
All steps must be performed during the first iteration, while additional iterations can begin at steps b, c, or d. Next, we calculate the distance matrix \(\mathbf {A}\) between all pixels in the reference region and the atomic coordinates. This procedure is shown geometrically for a single pixel in Fig. 3c. We then use linear regression to solve for the complex atomic shape function for all species present. The distance matrix \(\mathbf {A}\), carrier wave value \(A_0\), and the shape functions \(\mathbf {s}_1 \ldots \mathbf {s}_J\) are then used to calculate a symmetrized exit wave. Subsequently, we compute a windowed Fourier transform of the current guess for the aberration-free exit wave (in the first iteration the measured exit wave is used) and the symmetrized wave. We measure the phase difference of these Fourier transforms, shown in Fig. 3f. We use weighted least squares to fit the aberration coefficients, where the Fourier transform amplitude of the exit wave is used as the weighting function. These aberration function coefficients are added to the current values from the previous iteration (originally initialized to zero). This fitted aberration function is then applied to the original exit wave as in Fig. 3g, generating an updated guess for the aberration-free exit wave. If the corrected exit wave update is below a user-defined threshold, we assume the algorithm is converged and output the result. If not, we perform additional iterations. The algorithm described in Fig. 3 has three possible re-entry points for additional iterations, shown by the dashed lines. If we assume the atomic positions are accurate, we do not need to update them or recalculate the distance matrix \(\mathbf {A}\). Since this is the most time-consuming step of the algorithm, skipping it for additional iterations saves most of the calculation time.
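Because each aberration coefficient multiplies a fixed basis image in Fourier space, the fit inside each iteration reduces to weighted linear least squares on the measured phase difference. A numpy sketch (illustrative names; this is not the paper's Matlab code):

```python
import numpy as np

def fit_aberration_coeffs(dphi, weight, qx, qy, wavelength, orders):
    """Weighted least-squares fit of (Cx, Cy) aberration coefficients
    to a Fourier-space phase difference `dphi`, using the exit wave
    FFT amplitude `weight` as the weighting function. `orders` is a
    list of (m, n) pairs from the basis function chi."""
    qx, qy = np.broadcast_arrays(qx, qy)
    q2 = wavelength**2 * (qx**2 + qy**2)
    phi = np.arctan2(qy, qx)
    cols, labels = [], []
    for m, n in orders:
        radial = (q2 ** (m + n / 2.0)).ravel()
        cols.append(radial * np.cos(n * phi).ravel())
        labels.append((m, n, "x"))
        if n > 0:  # no sine term for radially symmetric aberrations
            cols.append(radial * np.sin(n * phi).ravel())
            labels.append((m, n, "y"))
    B = np.stack(cols, axis=1)
    w = np.sqrt(weight.ravel())  # sqrt so the normal equations see `weight`
    fit, *_ = np.linalg.lstsq(B * w[:, None], dphi.ravel() * w, rcond=None)
    return dict(zip(labels, fit))
```

In the full loop the returned coefficients would be accumulated onto the running totals and turned into an updated phase plate, exactly as the text describes.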
Alternatively, the atomic positions can be updated by peak fitting or a correlation method, starting the next iteration at the step in Fig. 3b. If the atomic positions are accurate enough, there is one other possible update at the start of each iteration. Each atomic site can be updated with a complex scaling coefficient to approximate slight thickness changes in the reference region. Both of these alternative update steps require updating the distance matrix \(\mathbf {A}\), the step shown in Fig. 3c. Limitations of the method The algorithm for measuring and correcting residual wave aberrations described above requires a relatively flat, defect-free region within a portion of the full field-of-view. A small reference region will degrade the accuracy of the measured aberration function. In the experimental results shown below, the size of the reference region was \(\approx\)50 unit cells for the Si3N4 sample, \(\approx\)1000 unit cells for the graphene sample, and \(\approx\)150 unit cells for the silicon wedge. The accuracy of the residual aberration function also depends on the signal-to-noise ratio and accuracy of the exit wave reconstruction or measurement. If the crystalline region of the sample contains random variation of the exit wave due to an amorphous layer on the surface, or systematic variations due to surface reconstruction, the resulting aberration function may contain small errors. This issue can be minimized by using as large a reference region as possible, and with good sample preparation methods. Another possible source of error is sample mis-tilt. Completely eliminating sample tilt is virtually impossible, and small amounts of sample tilt can mimic some residual aberration functions, in particular axial coma. Similarly, if the sample thickness changes linearly over the reference region, our method may fit a small amount of erroneous axial coma under some circumstances.
However, because both of these effects are heavily sample-dependent, it is impossible to assign firm numbers to the possible degree of error. In general, we recommend using complementary measurements to verify results, such as measurement of mean atomic coordinates or unit cell dimensions or angles from x-ray diffraction. Multislice simulations were performed using Matlab code following the methods of Kirkland [29]. Unless otherwise noted, all simulations were performed at 300 kV, a pixel size of 0.05 \(\mathrm {\AA }\) and 32 frozen phonon configurations. An information limit of 1.5 \(\mathrm {\AA }^{-1}\) was enforced by applying an 8th order Butterworth filter to the exit waves in Fourier space. The exit waves were not further defocused after propagation through the sample, approximating a white atom contrast condition for all amplitude images. The Si3N4 sample was flat polished on one side using an Allied MultiPrep system, then mirror polished with 0.1 μm diamond paper. The second side was dimpled and finished with a 1.0 μm diamond slurry to a thickness of about 20 μm. The sample was then ion milled on a Gatan PIPS at 0 °C using 5 kV Ar ions at an angle of 5° for 3 h, then at 1 kV for 30 min, followed by 0.5 kV for 5 min. This sample was not carbon coated and was found to be stable under the beam operated at 300 kV. Focal series of this sample were recorded at 300 kV in the TEAM 0.5, an FEI TITAN-class microscope [3]. The corrector was tuned for bright atom contrast (C3 = −6 μm, C5 = 2.5 mm) and the monochromator was excited to provide an energy spread <0.15 eV at full width half maximum. The focal series were acquired with a step size of 1.72 nm ranging from −34.4 nm underfocus to 34.4 nm overfocus, recorded on a Gatan Ultrascan 1000. The graphene sample was grown at 1035 °C by chemical vapor deposition onto a polycrystalline copper substrate.
The substrate was held at 150 mTorr hydrogen for 1.5 h before 400 mTorr methane was flowed over it for 15 min to grow single layer graphene [32]. This sample was imaged in the TEAM 0.5 microscope using monochromated, spherical aberration-corrected 80 kV imaging with the monochromator excited to provide an energy spread <0.15 eV at full width half maximum. A focal series of 5 images with a step size of 1.2 nm was recorded on a Gatan OneView detector. An off-axis hologram of a silicon wedge was recorded in the Cc-Cs-corrected TEAM I microscope operated at 80 kV accelerating voltage using an exposure time of 8 s on a Gatan Ultrascan 1000. The [011]-silicon sample was laser cut from a 3 mm disc down to 1 mm to fit the TEAM stage geometry [33]. For hologram acquisition, the corrector was tuned to correct all aberrations, up to and including 3rd order, below the measurement accuracy of the aberrations. The exit wave was reconstructed from the hologram using simple numerical Fourier optics [17]. Focal series reconstruction and data analysis All focal series reconstructions and data processing except for the off-axis holographic reconstruction were performed using Matlab code. Focal series reconstructions were performed using the Gerchberg–Saxton algorithm [10] where the implementation for HRTEM is described fully in [11, 12]. During reconstruction, sub-pixel image alignments were applied using the discrete Fourier transform method given in [34]. To quantify the atomic column positions in a complex image, we used nonlinear least squares to fit the peak positions using a two-dimensional elliptic Gaussian function for both the real and imaginary components.
This peak function \(\beta (x, y)\) is defined as $$\begin{aligned} \beta (x,y)=\, & {} b_1 \exp \left[ - b_2 \Delta x^2 - 2 b_3 \Delta x \Delta y - b_4 \Delta y^2 \right] \nonumber \\&+ i b_5 \exp \left[ - b_6 \Delta x^2 - 2 b_7 \Delta x \Delta y - b_8 \Delta y^2 \right] \nonumber \\&+ b_9 + i b_{10}, \end{aligned}$$ where \(b_1\) through \(b_{10}\) are the real fitting coefficients and \(\Delta x = x - x_0\) and \(\Delta y = y - y_0\) are the distances from the peak center \((x_0,y_0)\). For the Si-N dumbbells, two complex elliptic Gaussian functions were fitted to both peaks simultaneously. Exit wave reconstruction of Si\(_3\)N\(_4\) The first sample analyzed is a SiAlON wedge sample (isostructural to \(\upbeta\)-Si\(_3\)N\(_4\) with Al and O doping the Si and N sites to give the composition Si\(_{5.6}\)Al\(_{0.4}\)O\(_{0.4}\)N\(_{7.6}\)), recorded at 300 kV along the [0001] direction. Density functional theory [35] and neutron-scattering studies [36] predict that O might preferentially dope the 2a sites with a nearby Al balancing the extra charge, causing a 21 pm shift in one of three directions [37]. X-ray diffraction, by contrast, shows no site preference for Al or O [38]. We therefore wish to measure the column positions with as high a precision as possible to evaluate the dopant-ordering hypothesis and its potential local variation at the nanoscale. The SiAlON wedge will be referred to as the Si\(_3\)N\(_4\) sample for the remainder of this paper. Figure 4 shows the application of the above method for measuring residual aberrations to a focal series reconstruction of the Si\(_3\)N\(_4\) sample. A reference region was selected near the middle of the reconstructed exit wave, the amplitude of which is shown in Fig. 4a. Two atom types are included (Si and N sites), and the same shape function is used for the two unique N sites.
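A single-peak version of this fit can be sketched with scipy.optimize.least_squares by stacking the real and imaginary residuals; the helper names and starting-guess handling below are illustrative assumptions (not the paper's Matlab code), with the peak center \((x_0,y_0)\) refined together with \(b_1\) through \(b_{10}\):

```python
import numpy as np
from scipy.optimize import least_squares

def peak_model(p, x, y):
    """Complex elliptic Gaussian peak of Eq. (5); p = [x0, y0, b1..b10]."""
    x0, y0, b1, b2, b3, b4, b5, b6, b7, b8, b9, b10 = p
    dx, dy = x - x0, y - y0
    re = b1 * np.exp(-b2 * dx**2 - 2 * b3 * dx * dy - b4 * dy**2) + b9
    im = b5 * np.exp(-b6 * dx**2 - 2 * b7 * dx * dy - b8 * dy**2) + b10
    return re + 1j * im

def fit_peak(patch, x, y, p0):
    """Nonlinear least squares on the stacked real/imag residuals."""
    def resid(p):
        d = peak_model(p, x, y) - patch
        return np.concatenate([d.real.ravel(), d.imag.ravel()])
    return least_squares(resid, p0).x
```

Fitting a dumbbell, as done for the Si-N pairs, would concatenate two such models with a shared background into one residual vector.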
Figure 4c, d show that the aberration measurement is essentially converged after 5 iterations, and after 20 iterations the exit wave update approaches zero. The reconstruction algorithm therefore quickly evolves the aberration function coefficients towards the values which best approximate the point symmetrized exit wave. Note that the two are not exactly equivalent, as the experimental exit wave contains substantially more noise and can contain lattice distortions or small amounts of strain due to bending of the sample. No symmetrization is applied to the actual experimental exit wave in this approach. Furthermore, no additional filtering beyond the deconvolution with the residual wave aberration function and the information limit cutoff has been applied to the experimental exit wave. The numerical aberration coefficients are given in the "Appendix". Residual aberration correction applied to a focal series reconstruction of a \(\beta\)-Si\(_3\)N\(_4\) sample. a Reference region of the exit wave with atomic sites shown on half of image. b Complex atomic shape functions \(\mathbf {s}_1\) and \(\mathbf {s}_2\) for Si and N respectively. c Mean absolute difference of exit wave between each iteration. d Progression of the aberration correction algorithm, showing amplitude and phase of current corrected exit wave and symmetrized exit waves, phase difference of the Fourier transforms, and the fitted aberration function \(\chi (q_x,q_y)\) with an outer radius of 15 nm\(^{-1}\) a Aberration-corrected exit wave amplitude of the Si\(_3\)N\(_4\) sample. b, c Peak positions labeled in a plotted relative to positions inside the ideal unit cell for best-fit peaks before and after aberration correction respectively. All peak displacements from the ideal positions are scaled by a factor of 4 for plotting; numbers indicate the RMS displacement in picometers.
d, e Bond lengths from almost all peaks measured in the image, before and after aberration correction respectively. After measuring the residual aberrations from a small reference region, shown in Fig. 4, we have corrected these aberrations on the full image and plotted the amplitude in Fig. 5. The atomic positions appear extremely sharp, and no defects are visible other than the vacuum at the edge of the wedge sample. From multislice simulations we estimate the thickness of the crystalline portion of this sample ranges from 3 to 7 nm. To quantify the atomic column positions, we used nonlinear least squares to fit the peak positions using a complex, two-dimensional elliptic Gaussian function. For the Si-N dumbbells, two complex elliptic Gaussian functions are fitted simultaneously. The fitted peak positions relative to the ideal Si\(_3\)N\(_4\) lattice positions for a subset of 180 of the peaks are plotted in Fig. 5b, c from the exit waves before and after aberration correction. From the root-mean-square (RMS) displacements plotted, we see that the aberration correction has improved the fitting precision on most of the lattice sites. In particular, the dumbbells with strongly-overlapping peak functions have improved substantially, reaching peak precisions as low as 1.1 and 1.4 pm for the Si and N sites respectively. The isolated 2a N site position precision is not strongly affected by the residual aberrations. Returning to the original question of measuring atomic shifts due to the doping, we have plotted the bond length distributions of all nearest-neighbor sites that are more than 2 unit cells away from the vacuum edge and the edge of the full micrograph, in Fig. 5d, e. Before aberration correction, the bond length distribution for the dumbbell Si-N and the 2a N site–Si bonds appears to follow a bimodal distribution. The larger Si-N bond spacing in the hexagonal rings is even more distorted, spreading over approximately 50 pm.
However, after correcting the residual aberrations, all bond length distributions become monomodal. Therefore, we found no evidence of systematic shifts in the 2a N sites. Additionally, no local distortions of the \(\upbeta\)-Si\(_3\)N\(_4\) lattice such as those described in [39] were observed in these experiments. Finally, we note that because the reference lattice contains 14 site locations in each unit cell where measurements are taken, it is highly unlikely that we could be forcing one of the sites (such as a systematic 2a distortion) to be at an incorrect location. The algorithm should select the phase plate function which best minimizes the global aberrations. Single-layer graphene grain boundary The Si\(_3\)N\(_4\) sample analyzed in the previous section did not contain any lattice defects, making it a relatively easy test of the correction algorithm described in this paper. A better test of our method would be a crystalline sample that contains a large number of possible local bond angles and lengths, allowing for many possible measurement errors due to residual aberrations. One such sample is provided by the incommensurate grain boundaries found in polycrystalline single-layer graphene [32, 40, 41]. Figure 6a shows the exit wave phase of a graphene grain boundary with a large field of view after applying the aberration correction described in this work, with the aberration function inset and the numerical coefficients given in the "Appendix". Topological variation in the sample has created regions where amorphous carbon can collect on the sample surface, but the majority of the field of view shows clean, defect-free graphene. Near the center of the field of view, an incommensurate boundary runs vertically. a Phase of an aberration-corrected exit wave of single-layer graphene, containing a grain boundary. Fitted aberration function is shown inset with an outer radius of 8 nm\(^{-1}\).
b, c Enlarged view of the unobstructed boundary before and after residual aberration correction respectively The phase of the unobstructed region of the graphene grain boundary is plotted in Fig. 6b, c for the uncorrected and corrected exit waves respectively. Before aberration correction, we observe that the graphene lattices are extremely regular, but contain very little interpretable information. The grain boundary is particularly messy, due to the complex interaction of non-radially symmetric residual aberrations with the various atomic spacings present. By contrast, the corrected phase image in Fig. 6c has very well resolved atomic sites both in the crystalline lattices and along the grain boundary. Almost every site can be identified and the boundary structure can be easily quantified. We have used focal series exit wave reconstruction and the aberration correction algorithm described in this paper to characterize the structure of many different single-layer graphene grain boundary misorientations [42–44]. Off-axis hologram of a silicon wedge The experimental exit waves in the previous two sections were reconstructed from focal series. Focal series reconstruction has a well-known limitation that it cannot accurately reconstruct lower spatial frequencies [12, 16]. This leads to exit wave phase images in the reconstructions that are somewhat flatter (lower peak-to-peak range) than the true exit wave phases. By contrast, since off-axis holography uses a reference wave created by an electron biprism, it can measure the absolute phase of an exit wave [17, 18]. In order to test our method on an exit wave containing the full range of spatial frequencies, we have recorded an off-axis hologram of a silicon wedge with an [011] orientation. The phase of this reconstructed exit wave is plotted in Fig. 7a. We have then applied our residual aberration correction algorithm to this measurement, shown in Fig. 7b. 
The numerical aberration coefficients are given in the "Appendix". a, b Phase images of an off-axis hologram measurement of a Si sample with an [011] orientation, for the uncorrected and corrected exit waves respectively. Fitted aberration function is shown inset in b with an outer radius of 12 nm\(^{-1}\). c, d Line traces taken from a and b respectively The range of phases measured in these images is substantially higher than those in the previous focal series measurements, almost \(2 \pi\) along the thinnest edge of the sample. After aberration correction, the Si dumbbells are more cleanly resolved. To show the dumbbell structure more clearly, we have plotted line traces in Fig. 7c, d, for the uncorrected and corrected phase images respectively. After correction, almost all dumbbells show clear separation between the two Si atomic columns. We have developed an algorithm for measuring and correcting residual coherent wave aberrations in complex exit waves of crystalline samples, measured in transmission electron microscopy. Our algorithm relies on creating a synthetic exit wave by applying point-symmetrization to all atomic columns in a reference region, to approximate the aberration-free exit wave. Because our method is objective and automatic, it is not prone to operator errors that could be introduced from manual correction of the residual aberrations. It is important to note that no symmetrization is applied to the final experimental exit wave. We have applied our method to three experimental datasets, focal series reconstructions of a Si\(_3\)N\(_4\) wedge and a single-layer graphene grain boundary, and an off-axis hologram of a silicon wedge. In all cases, the residual aberration correction improved the precision, accuracy and interpretability of the complex exit waves. Our algorithm is simple to implement, and applicable to a large class of experimental exit wave measurements of crystalline samples oriented along a low-index zone axis. 
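The point-symmetrization at the heart of the algorithm can be illustrated on a single patch: averaging a patch with its copy rotated 180° about the column center yields an exactly point-symmetric column, which is the idealized aberration-free form assumed when building the synthetic exit wave. A minimal sketch (names are my own; a real implementation would apply this around each detected column center in the reference region):

```python
import numpy as np

def point_symmetrize(patch):
    """Average a complex patch with its 180-degree rotation about the center.

    For an odd-sized patch the center pixel is the symmetry point; the
    result is exactly point-symmetric, approximating an aberration-free
    atomic column.
    """
    rotated = patch[::-1, ::-1]
    return 0.5 * (patch + rotated)

# A deliberately asymmetric complex "atomic column" patch.
rng = np.random.default_rng(1)
patch = rng.standard_normal((15, 15)) + 1j * rng.standard_normal((15, 15))
sym = point_symmetrize(patch)
asym_residual = np.abs(sym - sym[::-1, ::-1]).max()  # exactly point-symmetric
```

Comparing the as-measured patches against such symmetrized versions is what lets the residual aberration function be estimated without any external reference.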
1. Smith, D., Arslan, I., Bleloch, A., Stach, E., Browning, N., Batson, P., Dellby, N., Krivanek, O., Blom, D., et al.: Development of aberration-corrected electron microscopy. Microsc. Microanal. 14(1), 2–15 (2008)
2. Hawkes, P.: Aberration correction past and present. Philos. Trans. A. Math. Phys. Eng. Sci. 367(1903), 3637–3664 (2009)
3. Dahmen, U., Erni, R., Radmilovic, V., Ksielowski, C., Rossell, M.D., Denes, P.: Background, status and future of the transmission electron aberration-corrected microscope project. Philos. Trans. A. Math. Phys. Eng. Sci. 367(1903), 3795–3808 (2009)
4. Linck, M., Hartel, P., Uhlemann, S., Kahl, F., Müller, H., Zach, J., Haider, M., Niestadt, M., Bischoff, M., Biskupek, J., et al.: Chromatic aberration correction for atomic resolution TEM imaging from 20 to 80 kV. Phys. Rev. Lett. 117(7), 076101 (2016)
5. Zemlin, F., Weiss, K., Schiske, P., Kunath, W., Herrmann, K.: Coma-free alignment of high resolution electron microscopes with the aid of optical diffractograms. Ultramicroscopy. 3, 49–60 (1978)
6. Typke, D., Dierksen, K.: Determination of image aberrations in high-resolution electron microscopy using diffractogram and cross-correlation methods. Optik. 99(4), 155–166 (1995)
7. Uhlemann, S., Haider, M.: Residual wave aberrations in the first spherical aberration corrected transmission electron microscope. Ultramicroscopy. 72(3), 109–119 (1998)
8. Kirkland, A., Meyer, R., Chang, L.: Local measurement and computational refinement of aberrations for HRTEM. Microsc. Microanal. 12(6), 461–468 (2006)
9. Schramm, S., Van der Molen, S., Tromp, R.: Intrinsic instability of aberration-corrected electron microscopes. Phys. Rev. Lett. 109, 163901 (2012)
10. Gerchberg, R.W., Saxton, W.O.: A practical algorithm for the determination of phase from image and diffraction plane pictures. Optik. 35, 237 (1972)
11. Allen, L.J., McBride, W., O'Leary, N.L., Oxley, M.P.: Exit wave reconstruction at atomic resolution. Ultramicroscopy. 100(1–2), 91–104 (2004)
12. Ophus, C., Ewalds, T.: Guidelines for quantitative reconstruction of complex exit waves in HRTEM. Ultramicroscopy. 113, 88–95 (2011)
13. Koch, C.T.: Towards full-resolution inline electron holography. Micron 63, 69–75 (2014)
14. Ozsoy-Keskinbora, C., Boothroyd, C., Dunin-Borkowski, R., van Aken, P., Koch, C.: Hybridization approach to in-line and off-axis (electron) holography for superior resolution and phase sensitivity. Sci. Rep. 4 (2014)
15. Kirkland, E.J.: Computation in electron microscopy. Acta. Crystallogr. A. Found. Adv. 72(1), 1–27 (2016)
16. Parvizi, A., Van den Broek, W., Koch, C.T.: Recovering low spatial frequencies in wavefront sensing based on intensity measurements. Adv. Struct. Chem. Imag. 2(1), 1–9 (2017)
17. Lichte, H., Formanek, P., Lenk, A., Linck, M., Matzeck, C., Lehmann, M., Simon, P.: Electron holography: applications to materials questions. Annu. Rev. Mater. Res. 37, 539–588 (2007)
18. Linck, M., Freitag, B., Kujawa, S., Lehmann, M., Niermann, T.: State of the art in atomic resolution off-axis electron holography. Ultramicroscopy. 116, 13–23 (2012)
19. Thust, A., Overwijk, M.H.F., Coene, W.M.J., Lentzen, M.: Numerical correction of lens aberrations in phase-retrieval HRTEM. Ultramicroscopy. 64(1–4), 249–264 (1996)
20. Danev, R., Nagayama, K.: Transmission electron microscopy with Zernike phase plate. Ultramicroscopy. 88(4), 243–252 (2001)
21. Barthel, J., Thust, A.: Aberration measurement in HRTEM: Implementation and diagnostic use of numerical procedures for the highly precise recognition of diffractogram patterns. Ultramicroscopy. 111(1), 27–46 (2010)
22. Vulović, M., Franken, E., Ravelli, R., Van Vliet, J., Rieger, B.: Precise and unbiased estimation of astigmatism and defocus in transmission electron microscopy. Ultramicroscopy. 116, 115–134 (2012)
23. Texier, M., Thibault-Pénisson, J.: Optimum correction conditions for aberration-corrected HRTEM SiC dumbbells chemical imaging. Micron. 43(4), 516–523 (2012)
24. Tang, D., Zandbergen, H., Jansen, J., Op de Beeck, M., Van Dyck, D.: Fine-tuning of the focal residue in exit-wave reconstruction. Ultramicroscopy. 64(1), 265–276 (1996)
25. Stenkamp, D.: Detection and quantitative assessment of image aberrations from single HRTEM lattice images. J. Microsc. 190(1–2), 194–203 (1998)
26. Lin, J.A., Cowley, J.M.: Calibration of the operating parameters for an HB5 STEM instrument. Ultramicroscopy. 19(1), 31–42 (1986)
27. Boothroyd, C.B.: Quantification of energy filtered lattice images and coherent convergent beam patterns. Scan. Microsc. 11, 31 (1997)
28. Lupini, A.R., Pennycook, S.J.: Rapid autotuning for crystalline specimens from an inline hologram. J. Electron. Microsc. 57(6), 195–201 (2008)
29. Kirkland, E.: Advanced computing in electron microscopy (2010)
30. Van den Broek, W., Jiang, X., Koch, C.: FDES, a GPU-based multislice algorithm with increased efficiency of the computation of the projected potential. Ultramicroscopy. 158, 89–97 (2015)
31. Krivanek, O., Dellby, N., Keyse, R., Murfitt, M., Own, C., Szilagyi, Z.: Advances in aberration-corrected scanning transmission electron microscopy and electron energy-loss spectroscopy. Adv. Imag. Electron. Phys. 153, 121–160 (2008)
32. Rasool, H.I., Ophus, C., Klug, W.S., Zettl, A., Gimzewski, J.K.: Measurement of the intrinsic strength of crystalline and polycrystalline graphene. Nat. Commun. 4 (2013)
33. Schmid, A., Duden, T., Olson, E., Donchev, T., Petrov, I.: In-column piezo-stages and experimental opportunities. Microsc. Microanal. 13(S02), 1158–1159 (2007)
34. Guizar-Sicairos, M., Thurman, S., Fienup, J.: Efficient subpixel image registration algorithms. Opt. Lett. 33(2), 156–158 (2008)
35. Fang, C.M., Metselaar, R.: Site preferences in \(\beta\)-SiAlON from first-principles calculations. J. Mat. Chem. 13(2), 335–337 (2003)
36. Khvatinskaya, D.Y., Loryan, V.E., Smirnov, K.L.: A neutron-diffraction study on the structure of \(\beta'\)-SiAlON. Inorg. Mat. 27(10), 1805–1807 (1991)
37. Thorel, A., Ciston, J., Bartel, T., Song, C.-Y., Dahmen, U.: Observation of the atomic structure of \(\beta'\)-SiAlON using three generations of high resolution electron microscopes. Philos. Mag. 93(10–12), 1172–1181 (2013)
38. Smrčok, L., Salamon, D., Scholtzová Jr., E., Richardson, J.W.: Time-of-flight Rietveld neutron structure refinement and quantum chemistry study of y-\(\alpha\)-SiAlON. J. Euro. Cer. Soc. 26(16), 3925–3931 (2006)
39. Kim, H.S., Zhang, Z., Kaiser, U.: Local symmetry breaking of a thin crystal structure of \(\beta\)-Si\(_3\)N\(_4\) as revealed by spherical aberration corrected high-resolution transmission electron microscopy images. J. Electron. Microsc. 61(3), 145–157 (2012)
40. Huang, P.Y., Ruiz-Vargas, C.S., van der Zande, A.M., Whitney, W.S., Levendorf, M.P., Kevek, J.W., Garg, S., Alden, J.S., Hustedt, C.J., Zhu, Y., et al.: Grains and grain boundaries in single-layer graphene atomic patchwork quilts. Nature 469(7330), 389–392 (2011)
41. Robertson, A.W., Warner, J.H.: Atomic resolution imaging of graphene by transmission electron microscopy. Nanoscale 5(10), 4079–4093 (2013)
42. Rasool, H.I., Ophus, C., Zhang, Z., Crommie, M.F., Yakobson, B.I., Zettl, A.: Conserved atomic bonding sequences and strain organization of graphene grain boundaries. Nano Lett. 14(12), 7057–7063 (2014)
43. Ophus, C., Shekhawat, A., Rasool, H., Zettl, A.: Large-scale experimental and theoretical study of graphene grain boundary structures. Phys. Rev. B. 92(20), 205402 (2015)
44. Shekhawat, A., Ophus, C., Ritchie, R.O.: A generalized Read–Shockley model and large scale simulations for the energy and structure of graphene grain boundaries. RSC Adv. 6(50), 44489–44497 (2016)

JC and HR recorded the focal series from the SiAlON and graphene samples respectively. ML recorded and reconstructed the silicon wedge off-axis hologram. CO reconstructed the exit waves, developed the aberration correction algorithm, applied it to the samples used in this study and performed the analyses.
All authors contributed to writing the manuscript. All authors read and approved the final manuscript. We thank Alain Thorel for providing the SiAlON sample, and Marissa Libbee for preparing a SiAlON wedge. CO thanks Christoph Koch for helpful discussions. We also thank Cory Czarnik for assistance with the Gatan OneView electron detector. Work at the Molecular Foundry was supported by the Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. HIR and AZ acknowledge support in part by the Director, Office of Basic Energy Sciences, Materials Sciences and Engineering Division, of the U.S. Department of Energy under Contract DE-AC02-05CH11231, within the sp\(^2\)-bonded Materials Program, which provided for detailed TEM characterization.

National Center for Electron Microscopy, Molecular Foundry, Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, USA — Colin Ophus & Jim Ciston
Department of Physics, University of California Berkeley, 366 LeConte Hall, MC 7300, Berkeley, USA — Haider I Rasool & Alex Zettl
Materials Science Division, Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, USA
Corrected Electron Optical Systems GmbH, Englerstrasse 28, 69126, Heidelberg, Germany — Martin Linck

Correspondence to Colin Ophus.

The numerical aberration coefficients found in this study are given in Table 1, using the convention used by CEOS aberration correctors [7]. In all images, the origin is in the upper left corner and the x-axis points down while the y-axis points right. Table 1 Numerical aberration coefficients measured in this study for the Si\(_3\)N\(_4\), graphene grain boundary and silicon wedge experimental datasets

Ophus, C., Rasool, H.I., Linck, M. et al.
Automatic software correction of residual aberrations in reconstructed HRTEM exit waves of crystalline samples. Adv Struct Chem Imag 2, 15 (2016) doi:10.1186/s40679-016-0030-1

Keywords: Atomic resolution HRTEM · Aberration correction · Off-axis holography · Wavefront sensing
The origin is not in the convex hull $\Rightarrow$ the set lies in a hemisphere?

I am trying to understand the proof of the following claim: Let $f:A \subseteq \mathbb{S}^n \to \mathbb{S}^n$ be an $L$-Lipschitz* map (with $L <1$). Then $f(A)$ is contained in the interior of a hemisphere. *The distance on $\mathbb{S}^n$ can be either the intrinsic one or the extrinsic (Euclidean) one, it does not matter. In the standard proof I have found here (pg 7, lemma 2.8), the author shows that the convex hull of $f(A)$ in the unit ball of $\mathbb{R}^{n+1}$ does not contain the origin. Why is this enough to deduce the claim?

As commented by Jyrki Lahtonen, if the convex hull of a set $B$ is closed, then it is the intersection of all closed half-spaces containing $B$. From this the following key lemma follows:

Lemma: A closed convex set in $\mathbb{R}^n$ which does not contain the origin $\bar O$ is contained in a closed half-space which does not contain $\bar O$. In particular, $B$ is contained in the interior of a half-space whose boundary does contain $\bar O$.

Proof: Suppose $B \subseteq \mathbb{R}^{n}$ is closed and convex. Then $$\bar O \notin B=\operatorname{Conv}(B)=\cap_{\text{half-space } \, H, B \subseteq H} H,$$ so there exists a closed half-space $H$ containing $B$ which does not contain the origin. $H$ can be written in the form $H:=\{ y \in \mathbb{R}^{n}| \langle y,v \rangle \ge c\}$ where $v \in \mathbb{R}^{n},c \in \mathbb{R}$. Since $\bar O \notin H$, we have $c>0$. Thus, $$ B \subseteq \{ y \in \mathbb{R}^{n}| \langle y,v \rangle \ge c > 0\} $$ is contained in the interior of the half-space $D=\{ y \in \mathbb{R}^{n}| \langle y,v \rangle \ge 0\}$, with $\bar O \in \operatorname{bd} D$.

Note: The lemma is false if $B$ is not closed. Take for instance $$B=\{ (x,y)\in \mathbb{R}^2 \,| \, x \ge 0, y \ge 0 \} \setminus \{(0,0)\}.$$ $B$ is convex and $\bar O \notin B$, but any closed half-space containing $B$ must contain $\bar O$.
However, $B$ is contained in the interior of a half-space whose boundary does contain $\bar O$, namely the half-space $\{ (x,y)\in \mathbb{R}^2 \,| \, x + y \ge 0 \}$.

The lemma implies the claim when $f(A)$ is compact. (This still leaves open the question of whether the claim holds in the more general case.) First, observe that compactness of $f(A)$ implies closedness of $\operatorname{Conv}(f(A))$. So, by the lemma there exists a closed half-space $H$ containing $f(A)$ which does not contain the origin. $H$ can be written in the form $H:=\{ y \in \mathbb{R}^{n+1}| \langle y,v \rangle \ge c > 0\}$ where $v \in \mathbb{R}^{n+1},c \in \mathbb{R}$. Thus $$ f(A) \subseteq \{ y \in \mathbb{S}^n| \langle y,v \rangle \ge c > 0\} $$ is contained in the interior of the hemisphere $D=\{ y \in \mathbb{S}^n| \langle y,v \rangle \ge 0\} $, as we wanted.

geometry lipschitz-functions convex-geometry convex-hulls

Asaf Shachar

$\begingroup$ Isn't the convex hull the intersection of the half-spaces containing the set $f(A)$? If the origin is not in the convex hull, then there exists a half-space containing $f(A)$ but not the origin, no? By a half-space I mean one side of any hyperplane - not necessarily thru the origin. IOW, the set of solutions of a linear inequality. $\endgroup$ – Jyrki Lahtonen Feb 26 '17 at 13:06

$\begingroup$ Thanks! It seems that you are essentially right. Convex hulls which are closed sets can indeed be represented as the intersection of all closed half-spaces containing the original set. So, it still remains to understand what are the most general conditions under which the theorem holds. I will edit the question to make this more clear. $\endgroup$ – Asaf Shachar Feb 26 '17 at 13:23

Well, it turns out the theorem indeed holds in general. For closed subsets $A$, the proof is outlined in the question.
If $A$ is not closed, one can always extend $f$ (uniquely) to be defined on the closure of $A$, while preserving the same Lipschitz constant (this is because the co-domain is complete). Now apply the result to that extension.
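The separating-hyperplane step can also be illustrated numerically for a finite point set: the point of the convex hull nearest the origin serves as the normal $v$, since its optimality condition gives $\langle y,v\rangle \ge \|v\|^2 > 0$ for every $y$ in the hull. A small sketch (illustration only, not part of the proof; the clustered points mimic the image of a contraction on the sphere):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Unit vectors clustered around e1 = (1, 0, 0); their convex hull stays
# well away from the origin.
u = rng.standard_normal((40, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
pts = np.array([1.0, 0.0, 0.0]) + 0.4 * u
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

# Nearest point of conv(pts) to the origin: minimize ||w @ pts||^2 over
# the probability simplex of weights w.
n = len(pts)
res = minimize(
    lambda w: np.sum((w @ pts) ** 2),
    x0=np.full(n, 1.0 / n),
    bounds=[(0.0, 1.0)] * n,
    constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0},
    method="SLSQP",
)
v = res.x @ pts       # normal of a separating hyperplane
c = np.min(pts @ v)   # every point satisfies <y, v> >= c
```

Since $c > 0$, the points lie in $\{ y \in \mathbb{S}^2 \mid \langle y,v\rangle \ge c > 0\}$, i.e. in the interior of the hemisphere with pole $v/\|v\|$, exactly as in the lemma.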
Lesson 04. Stabilization mode

Contents: How to implement stabilization mode · Analogies between translational and rotational motion · Derivation of ratio for required angular velocity of reaction wheel · Python implementation (Sample of Python code using the formula; Example of complete code of the satellite stabilization program in Python)

How to implement stabilization mode

Satellite stabilization mode means maintaining a zero angular velocity. This mode is necessary, for example, to obtain clear images or transfer them to a ground receiving point, when the data transmission time is long and the satellite antenna is not allowed to deviate from the ground receiving point. The theory described in this lesson is also suitable for maintaining any desired angular velocity (not only zero velocity), and for such tasks as tracking a moving object. You can change the satellite's angular velocity using reaction wheels, jet engines, electromagnetic coils, and gyrodyne engines. In this example we consider control by means of the torque produced by a reaction wheel. The action of this device is based on the Law of conservation of angular momentum. For example, when the reaction wheel engine spins in one direction, the spacecraft (SC), respectively, begins to rotate in the other direction.
This happens under the action of the same spin-up torque, but directed in the opposite direction, in accordance with Newton's Third Law. If, under the influence of external factors, the spacecraft begins to turn in a certain direction, it is enough to increase the rotation speed of the reaction wheel in the same direction. The unwanted rotation of the spacecraft then stops, because the reaction wheel "takes" the rotational moment instead of the satellite. Information about the angular velocity of the satellite is obtained from an angular velocity sensor. In this example, we consider how to calculate control commands for the reaction wheel from the readings of the angular velocity sensor and data on the speed of the reaction wheel, so that the satellite stabilizes or maintains the required angular velocity.

Analogies between translational and rotational motion

The analogue of the Law of conservation of momentum for rotational motion is the Law of conservation of angular momentum (also called the Law of conservation of kinetic momentum):

$\sum\limits_{i=1}^{n}{{{J}_{i}}\cdot {{\omega }_{i}}}=const$

In general, the rotational motion of a satellite is described by laws similar to those for translational motion. For each parameter of translational motion there is an analogous parameter of rotational motion:

Translational motion ↔ Rotational motion
Force $F \leftrightarrow M$ Moment of force (torque)
Distance $S \leftrightarrow \alpha$ Angle
Speed $V \leftrightarrow \omega$ Angular velocity
Acceleration $a \leftrightarrow \epsilon$ Angular acceleration
Mass $m \leftrightarrow J$ Moment of inertia
The laws of motion also look similar:

Title of law: Translational motion ↔ Rotational motion
Newton's second law: $F=m\cdot a$ ↔ $M=J\cdot \epsilon$
Kinetic energy: $E=\frac{m\cdot {{V}^{2}}}{2}$ ↔ $E=\frac{J\cdot {{\omega}^{2}}}{2}$
Law of momentum conservation: $\sum\limits_{i=1}^{n}{{{m}_{i}}\cdot {{V }_{i}}}=const$ ↔ $\sum\limits_{i=1}^{n}{{{J}_{i}}\cdot {{\omega }_{i}}}=const$

Derivation of ratio for required angular velocity of reaction wheel

Let us write the law of conservation of kinetic moment of the system 'satellite + reaction wheel' for the moments of time "1" and "2":

${{J}_{s}}\cdot {{\omega }_{s1}}+{{J}_{m}}\cdot {{\omega }_{m1}}={{J}_{s}}\cdot {{\omega }_{s2}}+{{J}_{m}}\cdot {{\omega }_{m2}}$

The absolute speed of the reaction wheel, i.e. the reaction wheel speed in an inertial coordinate system (for example, one associated with the Earth), is the sum of the satellite's angular velocity and the angular velocity of the reaction wheel relative to the satellite, i.e. its relative angular velocity:

${{\omega }_{mi}}={{\omega }_{si}}+{{{\omega }'}_{mi}}$

Please note: the reaction wheel can measure its own angular velocity only relative to the satellite body, i.e. the relative angular velocity. Let us express the desired speed of the reaction wheel which must be set:

${{J}_{s}}\cdot {{\omega }_{s1}}+{{J}_{m}}\cdot \left( {{\omega }_{s1}}+{{{{\omega }'}}_{m1}} \right)={{J}_{s}}\cdot {{\omega }_{s2}}+{{J}_{m}}\cdot \left( {{\omega }_{s2}}+{{{{\omega }'}}_{m2}} \right) $

$ \left( {{J}_{s}}+{{J}_{m}} \right)\left( {{\omega }_{s1}}-{{\omega }_{s2}} \right)=-{{J}_{m}}({{\omega }_{m1}}-{{\omega }_{m2}}) $

$ {{\omega }_{m2}}={{\omega }_{m1}}+\frac{{{J}_{s}}+{{J}_{m}}}{{{J}_{m}}}\left( {{\omega }_{s1}}-{{\omega }_{s2}} \right) $

Denote the relation $\frac{{{J}_{s}}+{{J}_{m}}}{{{J}_{m}}}$ as $k_d$. Operation of the algorithm does not require the exact value of $\frac{{{J}_{s}}+{{J}_{m}}}{{{J}_{m}}}$, because the reaction wheel cannot instantly set the required angular velocity.
Also, the precision of measurements is not absolute: the satellite's angular velocity measured with an angular velocity sensor is not exact, since measurements always contain noise. Note also that measurement of the angular velocity and command issuing to the reaction wheel occur with some minimum time step. All these limitations mean that $k_d$ should be selected experimentally; if that is not sufficient, detailed computer models are built which take all the above limitations into account. In our case, the coefficient $k_d$ will be selected experimentally.

$ {{\omega }_{m2}}={{\omega }_{m1}}+{{k}_{d}}\left( {{\omega }_{s1}}-{{\omega }_{s2}} \right) $

The angular velocity $\omega_{s2}$ at time "2" is the target angular velocity; we denote it by $\omega_{s\_goal}$. Thus, if the satellite is supposed to maintain the angular velocity $\omega_{s\_goal}$, then knowing the current angular velocity of the satellite and the current angular velocity of the reaction wheel, it is possible to calculate the desired velocity of the reaction wheel to maintain the "rotation with constant speed" mode:

${{\omega }_{m2}}={{\omega }_{m1}}+{{k}_{d}}\left( {{\omega }_{s1}}-{{\omega }_{{s\_goal}}} \right)$

Using the rotation mode with a constant speed, it is possible to make the satellite turn at any angle if the satellite is rotated at a constant speed for a certain time.
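The ratio derived above can be sanity-checked numerically. The inertia values below are illustrative only (they are not OrbiCraft's real parameters; as noted, the practical $k_d$ is tuned experimentally), and the check confirms that applying the formula with the exact $k_d$ conserves total angular momentum and brings the satellite to the target rate:

```python
# Illustrative numbers only: Js, Jm in kg*m^2, angular rates in deg/s.
Js, Jm = 0.05, 0.002      # satellite and reaction-wheel moments of inertia
omega_s1 = 3.0            # current satellite angular velocity
omega_m1_rel = 100.0      # current wheel speed relative to the satellite
omega_goal = 0.0          # stabilization: target satellite rate is zero

# Required relative wheel speed from the derived ratio.
kd_exact = (Js + Jm) / Jm
omega_m2_rel = omega_m1_rel + kd_exact * (omega_s1 - omega_goal)

# Check: total angular momentum is conserved and the satellite ends
# exactly at omega_goal.
L1 = Js * omega_s1 + Jm * (omega_s1 + omega_m1_rel)
omega_s2 = (L1 - Jm * omega_m2_rel) / (Js + Jm)
```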
Then the time that the satellite needs to rotate at a constant speed $\omega_{s\_goal}$ to turn to the required angle $\alpha$ is determined by dividing these values: $t=\frac{\alpha}{\omega_{{s\_goal}}}$

If the satellite is required to be stabilized, then $\omega_{s\_goal}=0$ and the expression becomes simpler:

${{\omega }_{m2}}={{\omega }_{m1}}+{{k}_{d}}\cdot {{\omega }_{s1}}$

Python implementation

Sample of Python code using the formula:

```python
# request angular velocity sensor (AVS) and reaction wheel data
hyro_state, gx_raw, gy_raw, gz_raw = hyro_request_raw(hyr_num)
mtr_state, mtr_speed = motor_request_speed(mtr_num)

# conversion of angular velocity values to degrees/sec
gz_degs = gz_raw * 0.00875

# if the AVS is mounted with its z axis up, then the angular velocity
# of the satellite coincides with the AVS reading along the z axis;
# otherwise it is necessary to change the sign:
# omega = -gz_degs
omega = gz_degs

mtr_new_speed = int(mtr_speed + kd * (omega - omega_goal))
```

Example of complete code of the satellite stabilization program in Python:

```python
# Differential feedback coefficient.
# The coefficient is positive if the reaction wheel is mounted with the z axis up
# and the AVS is also z-axis up.
# The coefficient is chosen experimentally, depending on the shape
# and the mass of your satellite.
kd = 200.0

# The time step of the algorithm, sec
time_step = 0.2

# Target satellite angular velocity, degrees/sec
# For stabilization mode it is equal to 0.0 degrees/sec
omega_goal = 0.0

# Reaction wheel number
mtr_num = 1

# Maximum allowed reaction wheel speed, rpm
mtr_max_speed = 4000

# Number of the AVS (angular velocity sensor)
hyr_num = 1

# Function for determining the new reaction wheel speed.
# The new reaction wheel speed is made up of the
# current reaction wheel speed and a speed increment.
# The speed increment is proportional to the error in angular velocity.
# mtr_speed - current reaction wheel angular speed, rpm
# omega - current satellite angular velocity, degrees/sec
# omega_goal - target angular velocity of the satellite, degrees/sec
# mtr_new_speed - required angular velocity of the reaction wheel, rpm
def motor_new_speed_PD(mtr_speed, omega, omega_goal):
    mtr_new_speed = int(mtr_speed + kd * (omega - omega_goal))
    if mtr_new_speed > mtr_max_speed:
        mtr_new_speed = mtr_max_speed
    elif mtr_new_speed < -mtr_max_speed:
        mtr_new_speed = -mtr_max_speed
    return mtr_new_speed

# The function turns on all devices
# to be used in the main program.
def initialize_all():
    print "Enable motor №", mtr_num
    motor_turn_on(mtr_num)
    print "Enable angular velocity sensor №", hyr_num
    hyro_turn_on(hyr_num)

# The function turns off all devices
def switch_off_all():
    print "Finishing..."
    print "Disable angular velocity sensor №", hyr_num
    hyro_turn_off(hyr_num)
    motor_set_speed(mtr_num, 0)
    motor_turn_off(mtr_num)
    print "Finish program"

# The main function of the program, from which the remaining functions are called.
def control():
    initialize_all()
    # Initialize the reaction wheel status
    mtr_state = 0
    # Initialize the status of the AVS
    hyro_state = 0
    for i in range(1000):
        print "i = ", i
        # Angular velocity sensor (AVS) and reaction wheel requests.
        hyro_state, gx_raw, gy_raw, gz_raw = hyro_request_raw(hyr_num)
        mtr_state, mtr_speed = motor_request_speed(mtr_num)
        # Processing the readings of the angular velocity sensor (AVS),
        # calculation of the satellite angular velocity.
        # If the error code of the AVS is 0, i.e. there is no error
        if not hyro_state:
            gx_degs = gx_raw * 0.00875
            gy_degs = gy_raw * 0.00875
            gz_degs = gz_raw * 0.00875
            omega = gz_degs
            print "gx_degs =", gx_degs, \
                "gy_degs =", gy_degs, "gz_degs =", gz_degs
        elif hyro_state == 1:
            print "Fail because of access error, check the connection"
        else:
            print "Fail because of interface error, check your code"
        # Processing the reaction wheel and setting the target angular velocity.
        if not mtr_state:  # if the error code is 0, i.e. no error
            print "Motor_speed: ", mtr_speed
            # setting of new reaction wheel speed
            mtr_new_speed = motor_new_speed_PD(mtr_speed, omega, omega_goal)
            motor_set_speed(mtr_num, mtr_new_speed)
        sleep(time_step)
    switch_off_all()
```

1. Change the program so that the satellite rotates at a constant speed.
2. Change the program so that the satellite works according to the flight timeline:
   * stabilization within 10 seconds
   * 180 degree rotation in 30 seconds
3. Rewrite the program in C and get it working.

en/stabilization.txt · Last modified: 2020/08/31 13:57 by golikov